2025-06-02 19:15:29.959265 | Job console starting
2025-06-02 19:15:29.993535 | Updating git repos
2025-06-02 19:15:30.058163 | Cloning repos into workspace
2025-06-02 19:15:30.244241 | Restoring repo states
2025-06-02 19:15:30.264493 | Merging changes
2025-06-02 19:15:30.264519 | Checking out repos
2025-06-02 19:15:30.503856 | Preparing playbooks
2025-06-02 19:15:31.167976 | Running Ansible setup
2025-06-02 19:15:35.454672 | PRE-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/pre.yaml@main]
2025-06-02 19:15:36.202592 |
2025-06-02 19:15:36.202757 | PLAY [Base pre]
2025-06-02 19:15:36.219884 |
2025-06-02 19:15:36.220039 | TASK [Setup log path fact]
2025-06-02 19:15:36.240427 | orchestrator | ok
2025-06-02 19:15:36.258396 |
2025-06-02 19:15:36.258536 | TASK [set-zuul-log-path-fact : Set log path for a build]
2025-06-02 19:15:36.311492 | orchestrator | ok
2025-06-02 19:15:36.325078 |
2025-06-02 19:15:36.325227 | TASK [emit-job-header : Print job information]
2025-06-02 19:15:36.367044 | # Job Information
2025-06-02 19:15:36.367282 | Ansible Version: 2.16.14
2025-06-02 19:15:36.367327 | Job: testbed-deploy-stable-in-a-nutshell-ubuntu-24.04
2025-06-02 19:15:36.367371 | Pipeline: post
2025-06-02 19:15:36.367401 | Executor: 521e9411259a
2025-06-02 19:15:36.367428 | Triggered by: https://github.com/osism/testbed/commit/ac006a7fea378f1b38fc889be9ab54b480327f41
2025-06-02 19:15:36.367458 | Event ID: eee0cf1c-3fe5-11f0-8b84-2af3573f70ae
2025-06-02 19:15:36.375799 |
2025-06-02 19:15:36.375939 | LOOP [emit-job-header : Print node information]
2025-06-02 19:15:36.495892 | orchestrator | ok:
2025-06-02 19:15:36.496192 | orchestrator | # Node Information
2025-06-02 19:15:36.496240 | orchestrator | Inventory Hostname: orchestrator
2025-06-02 19:15:36.496272 | orchestrator | Hostname: zuul-static-regiocloud-infra-1
2025-06-02 19:15:36.496302 | orchestrator | Username: zuul-testbed05
2025-06-02 19:15:36.496330 | orchestrator | Distro: Debian 12.11
2025-06-02 19:15:36.496362 | orchestrator | Provider: static-testbed
2025-06-02 19:15:36.496415 | orchestrator | Region:
2025-06-02 19:15:36.496444 | orchestrator | Label: testbed-orchestrator
2025-06-02 19:15:36.496472 | orchestrator | Product Name: OpenStack Nova
2025-06-02 19:15:36.496498 | orchestrator | Interface IP: 81.163.193.140
2025-06-02 19:15:36.519817 |
2025-06-02 19:15:36.519974 | TASK [log-inventory : Ensure Zuul Ansible directory exists]
2025-06-02 19:15:36.992508 | orchestrator -> localhost | changed
2025-06-02 19:15:37.001687 |
2025-06-02 19:15:37.001834 | TASK [log-inventory : Copy ansible inventory to logs dir]
2025-06-02 19:15:38.069066 | orchestrator -> localhost | changed
2025-06-02 19:15:38.096271 |
2025-06-02 19:15:38.096428 | TASK [add-build-sshkey : Check to see if ssh key was already created for this build]
2025-06-02 19:15:38.377653 | orchestrator -> localhost | ok
2025-06-02 19:15:38.392208 |
2025-06-02 19:15:38.392388 | TASK [add-build-sshkey : Create a new key in workspace based on build UUID]
2025-06-02 19:15:38.416669 | orchestrator | ok
2025-06-02 19:15:38.435832 | orchestrator | included: /var/lib/zuul/builds/99ae87a14d3a4e8eb8632c860e169cea/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/create-key-and-replace.yaml
2025-06-02 19:15:38.444039 |
2025-06-02 19:15:38.444172 | TASK [add-build-sshkey : Create Temp SSH key]
2025-06-02 19:15:40.247166 | orchestrator -> localhost | Generating public/private rsa key pair.
2025-06-02 19:15:40.247521 | orchestrator -> localhost | Your identification has been saved in /var/lib/zuul/builds/99ae87a14d3a4e8eb8632c860e169cea/work/99ae87a14d3a4e8eb8632c860e169cea_id_rsa
2025-06-02 19:15:40.247588 | orchestrator -> localhost | Your public key has been saved in /var/lib/zuul/builds/99ae87a14d3a4e8eb8632c860e169cea/work/99ae87a14d3a4e8eb8632c860e169cea_id_rsa.pub
2025-06-02 19:15:40.247635 | orchestrator -> localhost | The key fingerprint is:
2025-06-02 19:15:40.247678 | orchestrator -> localhost | SHA256:np5zV5XzW3iBKsRN4WRl5+S/FPhiP9ny0Y5qX1KlNfk zuul-build-sshkey
2025-06-02 19:15:40.247718 | orchestrator -> localhost | The key's randomart image is:
2025-06-02 19:15:40.247769 | orchestrator -> localhost | +---[RSA 3072]----+
2025-06-02 19:15:40.247809 | orchestrator -> localhost | | +oo o |
2025-06-02 19:15:40.247849 | orchestrator -> localhost | | +.. * .|
2025-06-02 19:15:40.247886 | orchestrator -> localhost | | . o. ..*+|
2025-06-02 19:15:40.247921 | orchestrator -> localhost | | o . ..*B|
2025-06-02 19:15:40.247956 | orchestrator -> localhost | | S .o.=E|
2025-06-02 19:15:40.248002 | orchestrator -> localhost | | . o .. *.O|
2025-06-02 19:15:40.248038 | orchestrator -> localhost | | o . .oB*|
2025-06-02 19:15:40.248073 | orchestrator -> localhost | | .... o Oo|
2025-06-02 19:15:40.248108 | orchestrator -> localhost | | oo o.oo o|
2025-06-02 19:15:40.248161 | orchestrator -> localhost | +----[SHA256]-----+
2025-06-02 19:15:40.248262 | orchestrator -> localhost | ok: Runtime: 0:00:01.291641
2025-06-02 19:15:40.259359 |
2025-06-02 19:15:40.259506 | TASK [add-build-sshkey : Remote setup ssh keys (linux)]
2025-06-02 19:15:40.303712 | orchestrator | ok
2025-06-02 19:15:40.317100 | orchestrator | included: /var/lib/zuul/builds/99ae87a14d3a4e8eb8632c860e169cea/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/remote-linux.yaml
2025-06-02 19:15:40.328957 |
2025-06-02 19:15:40.329090 | TASK [add-build-sshkey : Remove previously added zuul-build-sshkey]
2025-06-02 19:15:40.354005 | orchestrator | skipping: Conditional result was False
2025-06-02 19:15:40.366377 |
2025-06-02 19:15:40.366530 | TASK [add-build-sshkey : Enable access via build key on all nodes]
2025-06-02 19:15:40.967353 | orchestrator | changed
2025-06-02 19:15:40.980348 |
2025-06-02 19:15:40.980520 | TASK [add-build-sshkey : Make sure user has a .ssh]
2025-06-02 19:15:41.267090 | orchestrator | ok
2025-06-02 19:15:41.276852 |
2025-06-02 19:15:41.276992 | TASK [add-build-sshkey : Install build private key as SSH key on all nodes]
2025-06-02 19:15:41.959800 | orchestrator | ok
2025-06-02 19:15:41.966632 |
2025-06-02 19:15:41.966753 | TASK [add-build-sshkey : Install build public key as SSH key on all nodes]
2025-06-02 19:15:42.377207 | orchestrator | ok
2025-06-02 19:15:42.385572 |
2025-06-02 19:15:42.385693 | TASK [add-build-sshkey : Remote setup ssh keys (windows)]
2025-06-02 19:15:42.411193 | orchestrator | skipping: Conditional result was False
2025-06-02 19:15:42.419279 |
2025-06-02 19:15:42.419394 | TASK [remove-zuul-sshkey : Remove master key from local agent]
2025-06-02 19:15:42.858900 | orchestrator -> localhost | changed
2025-06-02 19:15:42.885907 |
2025-06-02 19:15:42.886057 | TASK [add-build-sshkey : Add back temp key]
2025-06-02 19:15:43.224714 | orchestrator -> localhost | Identity added: /var/lib/zuul/builds/99ae87a14d3a4e8eb8632c860e169cea/work/99ae87a14d3a4e8eb8632c860e169cea_id_rsa (zuul-build-sshkey)
2025-06-02 19:15:43.224957 | orchestrator -> localhost | ok: Runtime: 0:00:00.017360
2025-06-02 19:15:43.232251 |
2025-06-02 19:15:43.232355 | TASK [add-build-sshkey : Verify we can still SSH to all nodes]
2025-06-02 19:15:43.665086 | orchestrator | ok
2025-06-02 19:15:43.673235 |
2025-06-02 19:15:43.673368 | TASK [add-build-sshkey : Verify we can still SSH to all nodes (windows)]
2025-06-02 19:15:43.707902 | orchestrator | skipping: Conditional result was False
2025-06-02 19:15:43.779627 |
2025-06-02 19:15:43.779767 | TASK [start-zuul-console : Start zuul_console daemon.]
2025-06-02 19:15:44.212189 | orchestrator | ok
2025-06-02 19:15:44.226225 |
2025-06-02 19:15:44.226349 | TASK [validate-host : Define zuul_info_dir fact]
2025-06-02 19:15:44.260574 | orchestrator | ok
2025-06-02 19:15:44.281229 |
2025-06-02 19:15:44.282691 | TASK [validate-host : Ensure Zuul Ansible directory exists]
2025-06-02 19:15:44.573441 | orchestrator -> localhost | ok
2025-06-02 19:15:44.585578 |
2025-06-02 19:15:44.585729 | TASK [validate-host : Collect information about the host]
2025-06-02 19:15:45.795335 | orchestrator | ok
2025-06-02 19:15:45.809483 |
2025-06-02 19:15:45.809641 | TASK [validate-host : Sanitize hostname]
2025-06-02 19:15:45.876016 | orchestrator | ok
2025-06-02 19:15:45.884412 |
2025-06-02 19:15:45.884555 | TASK [validate-host : Write out all ansible variables/facts known for each host]
2025-06-02 19:15:46.437612 | orchestrator -> localhost | changed
2025-06-02 19:15:46.453357 |
2025-06-02 19:15:46.453505 | TASK [validate-host : Collect information about zuul worker]
2025-06-02 19:15:46.867661 | orchestrator | ok
2025-06-02 19:15:46.876705 |
2025-06-02 19:15:46.876866 | TASK [validate-host : Write out all zuul information for each host]
2025-06-02 19:15:47.429964 | orchestrator -> localhost | changed
2025-06-02 19:15:47.452684 |
2025-06-02 19:15:47.452868 | TASK [prepare-workspace-log : Start zuul_console daemon.]
2025-06-02 19:15:47.737798 | orchestrator | ok
2025-06-02 19:15:47.755665 |
2025-06-02 19:15:47.755893 | TASK [prepare-workspace-log : Synchronize src repos to workspace directory.]
2025-06-02 19:16:22.457720 | orchestrator | changed:
2025-06-02 19:16:22.458010 | orchestrator | .d..t...... src/
2025-06-02 19:16:22.458063 | orchestrator | .d..t...... src/github.com/
2025-06-02 19:16:22.458127 | orchestrator | .d..t...... src/github.com/osism/
2025-06-02 19:16:22.458163 | orchestrator | .d..t...... src/github.com/osism/ansible-collection-commons/
2025-06-02 19:16:22.458194 | orchestrator | RedHat.yml
2025-06-02 19:16:22.470984 | orchestrator | .L..t...... src/github.com/osism/ansible-collection-commons/roles/repository/tasks/CentOS.yml -> RedHat.yml
2025-06-02 19:16:22.471001 | orchestrator | RedHat.yml
2025-06-02 19:16:22.471053 | orchestrator | = 2.2.0"...
2025-06-02 19:16:36.647186 | orchestrator | 19:16:36.641 STDOUT terraform: - Finding latest version of hashicorp/null...
2025-06-02 19:16:36.780421 | orchestrator | 19:16:36.780 STDOUT terraform: - Finding terraform-provider-openstack/openstack versions matching ">= 1.53.0"...
2025-06-02 19:16:38.149281 | orchestrator | 19:16:38.149 STDOUT terraform: - Installing hashicorp/local v2.5.3...
2025-06-02 19:16:39.212562 | orchestrator | 19:16:39.212 STDOUT terraform: - Installed hashicorp/local v2.5.3 (signed, key ID 0C0AF313E5FD9F80)
2025-06-02 19:16:40.779373 | orchestrator | 19:16:40.779 STDOUT terraform: - Installing hashicorp/null v3.2.4...
2025-06-02 19:16:41.829230 | orchestrator | 19:16:41.828 STDOUT terraform: - Installed hashicorp/null v3.2.4 (signed, key ID 0C0AF313E5FD9F80)
2025-06-02 19:16:43.294248 | orchestrator | 19:16:43.293 STDOUT terraform: - Installing terraform-provider-openstack/openstack v3.1.0...
2025-06-02 19:16:44.444204 | orchestrator | 19:16:44.444 STDOUT terraform: - Installed terraform-provider-openstack/openstack v3.1.0 (signed, key ID 4F80527A391BEFD2)
2025-06-02 19:16:44.444310 | orchestrator | 19:16:44.444 STDOUT terraform: Providers are signed by their developers.
2025-06-02 19:16:44.444321 | orchestrator | 19:16:44.444 STDOUT terraform: If you'd like to know more about provider signing, you can read about it here:
2025-06-02 19:16:44.444349 | orchestrator | 19:16:44.444 STDOUT terraform: https://opentofu.org/docs/cli/plugins/signing/
2025-06-02 19:16:44.444465 | orchestrator | 19:16:44.444 STDOUT terraform: OpenTofu has created a lock file .terraform.lock.hcl to record the provider
2025-06-02 19:16:44.444523 | orchestrator | 19:16:44.444 STDOUT terraform: selections it made above. Include this file in your version control repository
2025-06-02 19:16:44.444572 | orchestrator | 19:16:44.444 STDOUT terraform: so that OpenTofu can guarantee to make the same selections by default when
2025-06-02 19:16:44.444579 | orchestrator | 19:16:44.444 STDOUT terraform: you run "tofu init" in the future.
2025-06-02 19:16:44.445265 | orchestrator | 19:16:44.445 STDOUT terraform: OpenTofu has been successfully initialized!
2025-06-02 19:16:44.445402 | orchestrator | 19:16:44.445 STDOUT terraform: You may now begin working with OpenTofu. Try running "tofu plan" to see
2025-06-02 19:16:44.445477 | orchestrator | 19:16:44.445 STDOUT terraform: any changes that are required for your infrastructure. All OpenTofu commands
2025-06-02 19:16:44.445484 | orchestrator | 19:16:44.445 STDOUT terraform: should now work.
2025-06-02 19:16:44.445533 | orchestrator | 19:16:44.445 STDOUT terraform: If you ever set or change modules or backend configuration for OpenTofu,
2025-06-02 19:16:44.445587 | orchestrator | 19:16:44.445 STDOUT terraform: rerun this command to reinitialize your working directory. If you forget, other
2025-06-02 19:16:44.445634 | orchestrator | 19:16:44.445 STDOUT terraform: commands will detect it and remind you to do so if necessary.
2025-06-02 19:16:44.666648 | orchestrator | 19:16:44.666 WARN  The `TERRAGRUNT_TFPATH` environment variable is deprecated and will be removed in a future version of Terragrunt. Use `TG_TF_PATH=/home/zuul-testbed05/terraform` instead.
2025-06-02 19:16:44.899091 | orchestrator | 19:16:44.898 STDOUT terraform: Created and switched to workspace "ci"!
2025-06-02 19:16:44.899215 | orchestrator | 19:16:44.899 STDOUT terraform: You're now on a new, empty workspace. Workspaces isolate their state,
2025-06-02 19:16:44.899228 | orchestrator | 19:16:44.899 STDOUT terraform: so if you run "tofu plan" OpenTofu will not see any existing state
2025-06-02 19:16:44.899235 | orchestrator | 19:16:44.899 STDOUT terraform: for this configuration.
2025-06-02 19:16:45.110135 | orchestrator | 19:16:45.109 WARN  The `TERRAGRUNT_TFPATH` environment variable is deprecated and will be removed in a future version of Terragrunt. Use `TG_TF_PATH=/home/zuul-testbed05/terraform` instead.
2025-06-02 19:16:45.256606 | orchestrator | 19:16:45.256 STDOUT terraform: ci.auto.tfvars
2025-06-02 19:16:45.261566 | orchestrator | 19:16:45.261 STDOUT terraform: default_custom.tf
2025-06-02 19:16:45.490154 | orchestrator | 19:16:45.487 WARN  The `TERRAGRUNT_TFPATH` environment variable is deprecated and will be removed in a future version of Terragrunt. Use `TG_TF_PATH=/home/zuul-testbed05/terraform` instead.
2025-06-02 19:16:46.465643 | orchestrator | 19:16:46.465 STDOUT terraform: data.openstack_networking_network_v2.public: Reading...
2025-06-02 19:16:46.954406 | orchestrator | 19:16:46.953 STDOUT terraform: data.openstack_networking_network_v2.public: Read complete after 1s [id=e6be7364-bfd8-4de7-8120-8f41c69a139a]
2025-06-02 19:16:47.204024 | orchestrator | 19:16:47.199 STDOUT terraform: OpenTofu used the selected providers to generate the following execution
2025-06-02 19:16:47.204073 | orchestrator | 19:16:47.199 STDOUT terraform: plan. Resource actions are indicated with the following symbols:
2025-06-02 19:16:47.204079 | orchestrator | 19:16:47.199 STDOUT terraform:   + create
2025-06-02 19:16:47.204085 | orchestrator | 19:16:47.199 STDOUT terraform:  <= read (data resources)
2025-06-02 19:16:47.204090 | orchestrator | 19:16:47.199 STDOUT terraform: OpenTofu will perform the following actions:
2025-06-02 19:16:47.204097 | orchestrator | 19:16:47.199 STDOUT terraform:   # data.openstack_images_image_v2.image will be read during apply
2025-06-02 19:16:47.204100 | orchestrator | 19:16:47.199 STDOUT terraform:   # (config refers to values not yet known)
2025-06-02 19:16:47.204104 | orchestrator | 19:16:47.199 STDOUT terraform:  <= data "openstack_images_image_v2" "image" {
2025-06-02 19:16:47.204108 | orchestrator | 19:16:47.199 STDOUT terraform:   + checksum = (known after apply)
2025-06-02 19:16:47.204112 | orchestrator | 19:16:47.200 STDOUT terraform:   + created_at = (known after apply)
2025-06-02 19:16:47.204116 | orchestrator | 19:16:47.200 STDOUT terraform:   + file = (known after apply)
2025-06-02 19:16:47.204119 | orchestrator | 19:16:47.200 STDOUT terraform:   + id = (known after apply)
2025-06-02 19:16:47.204123 | orchestrator | 19:16:47.200 STDOUT terraform:   + metadata = (known after apply)
2025-06-02 19:16:47.204127 | orchestrator | 19:16:47.200 STDOUT terraform:   + min_disk_gb = (known after apply)
2025-06-02 19:16:47.204131 | orchestrator | 19:16:47.200 STDOUT terraform:   + min_ram_mb = (known after apply)
2025-06-02 19:16:47.204135 | orchestrator | 19:16:47.200 STDOUT terraform:   + most_recent = true
2025-06-02 19:16:47.204145 | orchestrator | 19:16:47.200 STDOUT terraform:   + name = (known after apply)
2025-06-02 19:16:47.204149 | orchestrator | 19:16:47.200 STDOUT terraform:   + protected = (known after apply)
2025-06-02 19:16:47.204153 | orchestrator | 19:16:47.200 STDOUT terraform:   + region = (known after apply)
2025-06-02 19:16:47.204156 | orchestrator | 19:16:47.200 STDOUT terraform:   + schema = (known after apply)
2025-06-02 19:16:47.204160 | orchestrator | 19:16:47.200 STDOUT terraform:   + size_bytes = (known after apply)
2025-06-02 19:16:47.204164 | orchestrator | 19:16:47.200 STDOUT terraform:   + tags = (known after apply)
2025-06-02 19:16:47.204168 | orchestrator | 19:16:47.200 STDOUT terraform:   + updated_at = (known after apply)
2025-06-02 19:16:47.204172 | orchestrator | 19:16:47.200 STDOUT terraform:   }
2025-06-02 19:16:47.204176 | orchestrator | 19:16:47.200 STDOUT terraform:   # data.openstack_images_image_v2.image_node will be read during apply
2025-06-02 19:16:47.204180 | orchestrator | 19:16:47.200 STDOUT terraform:   # (config refers to values not yet known)
2025-06-02 19:16:47.204186 | orchestrator | 19:16:47.200 STDOUT terraform:  <= data "openstack_images_image_v2" "image_node" {
2025-06-02 19:16:47.204190 | orchestrator | 19:16:47.200 STDOUT terraform:   + checksum = (known after apply)
2025-06-02 19:16:47.204194 | orchestrator | 19:16:47.200 STDOUT terraform:   + created_at = (known after apply)
2025-06-02 19:16:47.204198 | orchestrator | 19:16:47.200 STDOUT terraform:   + file = (known after apply)
2025-06-02 19:16:47.204202 | orchestrator | 19:16:47.200 STDOUT terraform:   + id = (known after apply)
2025-06-02 19:16:47.204205 | orchestrator | 19:16:47.200 STDOUT terraform:   + metadata = (known after apply)
2025-06-02 19:16:47.204209 | orchestrator | 19:16:47.200 STDOUT terraform:   + min_disk_gb = (known after apply)
2025-06-02 19:16:47.204213 | orchestrator | 19:16:47.200 STDOUT terraform:   + min_ram_mb = (known after apply)
2025-06-02 19:16:47.204216 | orchestrator | 19:16:47.200 STDOUT terraform:   + most_recent = true
2025-06-02 19:16:47.204220 | orchestrator | 19:16:47.200 STDOUT terraform:   + name = (known after apply)
2025-06-02 19:16:47.204224 | orchestrator | 19:16:47.200 STDOUT terraform:   + protected = (known after apply)
2025-06-02 19:16:47.204227 | orchestrator | 19:16:47.200 STDOUT terraform:   + region = (known after apply)
2025-06-02 19:16:47.204245 | orchestrator | 19:16:47.200 STDOUT terraform:   + schema = (known after apply)
2025-06-02 19:16:47.204253 | orchestrator | 19:16:47.200 STDOUT terraform:   + size_bytes = (known after apply)
2025-06-02 19:16:47.204257 | orchestrator | 19:16:47.200 STDOUT terraform:   + tags = (known after apply)
2025-06-02 19:16:47.204261 | orchestrator | 19:16:47.200 STDOUT terraform:   + updated_at = (known after apply)
2025-06-02 19:16:47.204265 | orchestrator | 19:16:47.200 STDOUT terraform:   }
2025-06-02 19:16:47.204269 | orchestrator | 19:16:47.200 STDOUT terraform:   # local_file.MANAGER_ADDRESS will be created
2025-06-02 19:16:47.204272 | orchestrator | 19:16:47.200 STDOUT terraform:   + resource "local_file" "MANAGER_ADDRESS" {
2025-06-02 19:16:47.204276 | orchestrator | 19:16:47.200 STDOUT terraform:   + content = (known after apply)
2025-06-02 19:16:47.204280 | orchestrator | 19:16:47.201 STDOUT terraform:   + content_base64sha256 = (known after apply)
2025-06-02 19:16:47.204287 | orchestrator | 19:16:47.201 STDOUT terraform:   + content_base64sha512 = (known after apply)
2025-06-02 19:16:47.204291 | orchestrator | 19:16:47.201 STDOUT terraform:   + content_md5 = (known after apply)
2025-06-02 19:16:47.204295 | orchestrator | 19:16:47.201 STDOUT terraform:   + content_sha1 = (known after apply)
2025-06-02 19:16:47.204299 | orchestrator | 19:16:47.201 STDOUT terraform:   + content_sha256 = (known after apply)
2025-06-02 19:16:47.204302 | orchestrator | 19:16:47.201 STDOUT terraform:   + content_sha512 = (known after apply)
2025-06-02 19:16:47.204306 | orchestrator | 19:16:47.201 STDOUT terraform:   + directory_permission = "0777"
2025-06-02 19:16:47.204310 | orchestrator | 19:16:47.201 STDOUT terraform:   + file_permission = "0644"
2025-06-02 19:16:47.204314 | orchestrator | 19:16:47.201 STDOUT terraform:   + filename = ".MANAGER_ADDRESS.ci"
2025-06-02 19:16:47.204317 | orchestrator | 19:16:47.201 STDOUT terraform:   + id = (known after apply)
2025-06-02 19:16:47.204321 | orchestrator | 19:16:47.201 STDOUT terraform:   }
2025-06-02 19:16:47.204325 | orchestrator | 19:16:47.201 STDOUT terraform:   # local_file.id_rsa_pub will be created
2025-06-02 19:16:47.204329 | orchestrator | 19:16:47.201 STDOUT terraform:   + resource "local_file" "id_rsa_pub" {
2025-06-02 19:16:47.204332 | orchestrator | 19:16:47.201 STDOUT terraform:   + content = (known after apply)
2025-06-02 19:16:47.204336 | orchestrator | 19:16:47.201 STDOUT terraform:   + content_base64sha256 = (known after apply)
2025-06-02 19:16:47.204339 | orchestrator | 19:16:47.201 STDOUT terraform:   + content_base64sha512 = (known after apply)
2025-06-02 19:16:47.204343 | orchestrator | 19:16:47.201 STDOUT terraform:   + content_md5 = (known after apply)
2025-06-02 19:16:47.204347 | orchestrator | 19:16:47.201 STDOUT terraform:   + content_sha1 = (known after apply)
2025-06-02 19:16:47.204350 | orchestrator | 19:16:47.201 STDOUT terraform:   + content_sha256 = (known after apply)
2025-06-02 19:16:47.204354 | orchestrator | 19:16:47.201 STDOUT terraform:   + content_sha512 = (known after apply)
2025-06-02 19:16:47.204358 | orchestrator | 19:16:47.201 STDOUT terraform:   + directory_permission = "0777"
2025-06-02 19:16:47.204361 | orchestrator | 19:16:47.201 STDOUT terraform:   + file_permission = "0644"
2025-06-02 19:16:47.204365 | orchestrator | 19:16:47.201 STDOUT terraform:   + filename = ".id_rsa.ci.pub"
2025-06-02 19:16:47.204369 | orchestrator | 19:16:47.201 STDOUT terraform:   + id = (known after apply)
2025-06-02 19:16:47.204372 | orchestrator | 19:16:47.201 STDOUT terraform:   }
2025-06-02 19:16:47.204376 | orchestrator | 19:16:47.201 STDOUT terraform:   # local_file.inventory will be created
2025-06-02 19:16:47.204380 | orchestrator | 19:16:47.201 STDOUT terraform:   + resource "local_file" "inventory" {
2025-06-02 19:16:47.204384 | orchestrator | 19:16:47.201 STDOUT terraform:   + content = (known after apply)
2025-06-02 19:16:47.204387 | orchestrator | 19:16:47.201 STDOUT terraform:   + content_base64sha256 = (known after apply)
2025-06-02 19:16:47.204409 | orchestrator | 19:16:47.201 STDOUT terraform:   + content_base64sha512 = (known after apply)
2025-06-02 19:16:47.204419 | orchestrator | 19:16:47.201 STDOUT terraform:   + content_md5 = (known after apply)
2025-06-02 19:16:47.204426 | orchestrator | 19:16:47.202 STDOUT terraform:   + content_sha1 = (known after apply)
2025-06-02 19:16:47.204429 | orchestrator | 19:16:47.202 STDOUT terraform:   + content_sha256 = (known after apply)
2025-06-02 19:16:47.204433 | orchestrator | 19:16:47.202 STDOUT terraform:   + content_sha512 = (known after apply)
2025-06-02 19:16:47.204437 | orchestrator | 19:16:47.202 STDOUT terraform:   + directory_permission = "0777"
2025-06-02 19:16:47.204441 | orchestrator | 19:16:47.202 STDOUT terraform:   + file_permission = "0644"
2025-06-02 19:16:47.204444 | orchestrator | 19:16:47.202 STDOUT terraform:   + filename = "inventory.ci"
2025-06-02 19:16:47.204448 | orchestrator | 19:16:47.202 STDOUT terraform:   + id = (known after apply)
2025-06-02 19:16:47.204455 | orchestrator | 19:16:47.202 STDOUT terraform:   }
2025-06-02 19:16:47.204458 | orchestrator | 19:16:47.202 STDOUT terraform:   # local_sensitive_file.id_rsa will be created
2025-06-02 19:16:47.204462 | orchestrator | 19:16:47.202 STDOUT terraform:   + resource "local_sensitive_file" "id_rsa" {
2025-06-02 19:16:47.204466 | orchestrator | 19:16:47.202 STDOUT terraform:   + content = (sensitive value)
2025-06-02 19:16:47.204484 | orchestrator | 19:16:47.202 STDOUT terraform:   + content_base64sha256 = (known after apply)
2025-06-02 19:16:47.204488 | orchestrator | 19:16:47.202 STDOUT terraform:   + content_base64sha512 = (known after apply)
2025-06-02 19:16:47.204492 | orchestrator | 19:16:47.202 STDOUT terraform:   + content_md5 = (known after apply)
2025-06-02 19:16:47.204496 | orchestrator | 19:16:47.202 STDOUT terraform:   + content_sha1 = (known after apply)
2025-06-02 19:16:47.204499 | orchestrator | 19:16:47.202 STDOUT terraform:   + content_sha256 = (known after apply)
2025-06-02 19:16:47.204503 | orchestrator | 19:16:47.202 STDOUT terraform:   + content_sha512 = (known after apply)
2025-06-02 19:16:47.204507 | orchestrator | 19:16:47.202 STDOUT terraform:   + directory_permission = "0700"
2025-06-02 19:16:47.204510 | orchestrator | 19:16:47.202 STDOUT terraform:   + file_permission = "0600"
2025-06-02 19:16:47.204514 | orchestrator | 19:16:47.202 STDOUT terraform:   + filename = ".id_rsa.ci"
2025-06-02 19:16:47.204518 | orchestrator | 19:16:47.202 STDOUT terraform:   + id = (known after apply)
2025-06-02 19:16:47.204521 | orchestrator | 19:16:47.202 STDOUT terraform:   }
2025-06-02 19:16:47.204525 | orchestrator | 19:16:47.202 STDOUT terraform:   # null_resource.node_semaphore will be created
2025-06-02 19:16:47.204529 | orchestrator | 19:16:47.202 STDOUT terraform:   + resource "null_resource" "node_semaphore" {
2025-06-02 19:16:47.204533 | orchestrator | 19:16:47.202 STDOUT terraform:   + id = (known after apply)
2025-06-02 19:16:47.204536 | orchestrator | 19:16:47.202 STDOUT terraform:   }
2025-06-02 19:16:47.204540 | orchestrator | 19:16:47.202 STDOUT terraform:   # openstack_blockstorage_volume_v3.manager_base_volume[0] will be created
2025-06-02 19:16:47.204544 | orchestrator | 19:16:47.202 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "manager_base_volume" {
2025-06-02 19:16:47.204548 | orchestrator | 19:16:47.202 STDOUT terraform:   + attachment = (known after apply)
2025-06-02 19:16:47.204554 | orchestrator | 19:16:47.202 STDOUT terraform:   + availability_zone = "nova"
2025-06-02 19:16:47.204558 | orchestrator | 19:16:47.202 STDOUT terraform:   + id = (known after apply)
2025-06-02 19:16:47.204562 | orchestrator | 19:16:47.202 STDOUT terraform:   + image_id = (known after apply)
2025-06-02 19:16:47.204566 | orchestrator | 19:16:47.203 STDOUT terraform:   + metadata = (known after apply)
2025-06-02 19:16:47.204569 | orchestrator | 19:16:47.203 STDOUT terraform:   + name = "testbed-volume-manager-base"
2025-06-02 19:16:47.204576 | orchestrator | 19:16:47.203 STDOUT terraform:   + region = (known after apply)
2025-06-02 19:16:47.204579 | orchestrator | 19:16:47.203 STDOUT terraform:   + size = 80
2025-06-02 19:16:47.204586 | orchestrator | 19:16:47.203 STDOUT terraform:   + volume_retype_policy = "never"
2025-06-02 19:16:47.204590 | orchestrator | 19:16:47.203 STDOUT terraform:   + volume_type = "ssd"
2025-06-02 19:16:47.204593 | orchestrator | 19:16:47.203 STDOUT terraform:   }
2025-06-02 19:16:47.204597 | orchestrator | 19:16:47.203 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_base_volume[0] will be created
2025-06-02 19:16:47.204601 | orchestrator | 19:16:47.203 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-06-02 19:16:47.204605 | orchestrator | 19:16:47.203 STDOUT terraform:   + attachment = (known after apply)
2025-06-02 19:16:47.204608 | orchestrator | 19:16:47.203 STDOUT terraform:   + availability_zone = "nova"
2025-06-02 19:16:47.204612 | orchestrator | 19:16:47.203 STDOUT terraform:   + id = (known after apply)
2025-06-02 19:16:47.204616 | orchestrator | 19:16:47.203 STDOUT terraform:   + image_id = (known after apply)
2025-06-02 19:16:47.204619 | orchestrator | 19:16:47.203 STDOUT terraform:   + metadata = (known after apply)
2025-06-02 19:16:47.204640 | orchestrator | 19:16:47.203 STDOUT terraform:   + name = "testbed-volume-0-node-base"
2025-06-02 19:16:47.204644 | orchestrator | 19:16:47.203 STDOUT terraform:   + region = (known after apply)
2025-06-02 19:16:47.204647 | orchestrator | 19:16:47.203 STDOUT terraform:   + size = 80
2025-06-02 19:16:47.204651 | orchestrator | 19:16:47.203 STDOUT terraform:   + volume_retype_policy = "never"
2025-06-02 19:16:47.204858 | orchestrator | 19:16:47.203 STDOUT terraform:   + volume_type = "ssd"
2025-06-02 19:16:47.204864 | orchestrator | 19:16:47.203 STDOUT terraform:   }
2025-06-02 19:16:47.204898 | orchestrator | 19:16:47.203 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_base_volume[1] will be created
2025-06-02 19:16:47.204902 | orchestrator | 19:16:47.203 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-06-02 19:16:47.204995 | orchestrator | 19:16:47.204 STDOUT terraform:   + attachment = (known after apply)
2025-06-02 19:16:47.205034 | orchestrator | 19:16:47.205 STDOUT terraform:   + availability_zone = "nova"
2025-06-02 19:16:47.205079 | orchestrator | 19:16:47.205 STDOUT terraform:   + id = (known after apply)
2025-06-02 19:16:47.205122 | orchestrator | 19:16:47.205 STDOUT terraform:   + image_id = (known after apply)
2025-06-02 19:16:47.205171 | orchestrator | 19:16:47.205 STDOUT terraform:   + metadata = (known after apply)
2025-06-02 19:16:47.205243 | orchestrator | 19:16:47.205 STDOUT terraform:   + name = "testbed-volume-1-node-base"
2025-06-02 19:16:47.205289 | orchestrator | 19:16:47.205 STDOUT terraform:   + region = (known after apply)
2025-06-02 19:16:47.205327 | orchestrator | 19:16:47.205 STDOUT terraform:   + size = 80
2025-06-02 19:16:47.205366 | orchestrator | 19:16:47.205 STDOUT terraform:   + volume_retype_policy = "never"
2025-06-02 19:16:47.205410 | orchestrator | 19:16:47.205 STDOUT terraform:   + volume_type = "ssd"
2025-06-02 19:16:47.205431 | orchestrator | 19:16:47.205 STDOUT terraform:   }
2025-06-02 19:16:47.205486 | orchestrator | 19:16:47.205 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_base_volume[2] will be created
2025-06-02 19:16:47.205548 | orchestrator | 19:16:47.205 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-06-02 19:16:47.205593 | orchestrator | 19:16:47.205 STDOUT terraform:   + attachment = (known after apply)
2025-06-02 19:16:47.205634 | orchestrator | 19:16:47.205 STDOUT terraform:   + availability_zone = "nova"
2025-06-02 19:16:47.205685 | orchestrator | 19:16:47.205 STDOUT terraform:   + id = (known after apply)
2025-06-02 19:16:47.205757 | orchestrator | 19:16:47.205 STDOUT terraform:   + image_id = (known after apply)
2025-06-02 19:16:47.205810 | orchestrator | 19:16:47.205 STDOUT terraform:   + metadata = (known after apply)
2025-06-02 19:16:47.205861 | orchestrator | 19:16:47.205 STDOUT terraform:   + name = "testbed-volume-2-node-base"
2025-06-02 19:16:47.205911 | orchestrator | 19:16:47.205 STDOUT terraform:   + region = (known after apply)
2025-06-02 19:16:47.205958 | orchestrator | 19:16:47.205 STDOUT terraform:   + size = 80
2025-06-02 19:16:47.205990 | orchestrator | 19:16:47.205 STDOUT terraform:   + volume_retype_policy = "never"
2025-06-02 19:16:47.206036 | orchestrator | 19:16:47.205 STDOUT terraform:   + volume_type = "ssd"
2025-06-02 19:16:47.206058 | orchestrator | 19:16:47.206 STDOUT terraform:   }
2025-06-02 19:16:47.206111 | orchestrator | 19:16:47.206 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_base_volume[3] will be created
2025-06-02 19:16:47.206163 | orchestrator | 19:16:47.206 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-06-02 19:16:47.206215 | orchestrator | 19:16:47.206 STDOUT terraform:   + attachment = (known after apply)
2025-06-02 19:16:47.206266 | orchestrator | 19:16:47.206 STDOUT terraform:   + availability_zone = "nova"
2025-06-02 19:16:47.206309 | orchestrator | 19:16:47.206 STDOUT terraform:   + id = (known after apply)
2025-06-02 19:16:47.206358 | orchestrator | 19:16:47.206 STDOUT terraform:   + image_id = (known after apply)
2025-06-02 19:16:47.206401 | orchestrator | 19:16:47.206 STDOUT terraform:   + metadata = (known after apply)
2025-06-02 19:16:47.206451 | orchestrator | 19:16:47.206 STDOUT terraform:   + name = "testbed-volume-3-node-base"
2025-06-02 19:16:47.206494 | orchestrator | 19:16:47.206 STDOUT terraform:   + region = (known after apply)
2025-06-02 19:16:47.206534 | orchestrator | 19:16:47.206 STDOUT terraform:   + size = 80
2025-06-02 19:16:47.206565 | orchestrator | 19:16:47.206 STDOUT terraform:   + volume_retype_policy = "never"
2025-06-02 19:16:47.206595 | orchestrator | 19:16:47.206 STDOUT terraform:   + volume_type = "ssd"
2025-06-02 19:16:47.206615 | orchestrator | 19:16:47.206 STDOUT terraform:   }
2025-06-02 19:16:47.206683 | orchestrator | 19:16:47.206 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_base_volume[4] will be created
2025-06-02 19:16:47.206753 | orchestrator | 19:16:47.206 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-06-02 19:16:47.206798 | orchestrator | 19:16:47.206 STDOUT terraform:   + attachment = (known after apply)
2025-06-02 19:16:47.206829 | orchestrator | 19:16:47.206 STDOUT terraform:   + availability_zone = "nova"
2025-06-02 19:16:47.206872 | orchestrator | 19:16:47.206 STDOUT terraform:   + id = (known after apply)
2025-06-02 19:16:47.206921 | orchestrator | 19:16:47.206 STDOUT terraform:   + image_id = (known after apply)
2025-06-02 19:16:47.206965 | orchestrator | 19:16:47.206 STDOUT terraform:   + metadata = (known after apply)
2025-06-02 19:16:47.207015 | orchestrator | 19:16:47.206 STDOUT terraform:   + name = "testbed-volume-4-node-base"
2025-06-02 19:16:47.207058 | orchestrator | 19:16:47.207 STDOUT terraform:   + region = (known after apply)
2025-06-02 19:16:47.207086 | orchestrator | 19:16:47.207 STDOUT terraform:   + size = 80
2025-06-02 19:16:47.207132 | orchestrator | 19:16:47.207 STDOUT terraform:   + volume_retype_policy = "never"
2025-06-02 19:16:47.207171 | orchestrator | 19:16:47.207 STDOUT terraform:   + volume_type = "ssd"
2025-06-02 19:16:47.207193 | orchestrator | 19:16:47.207 STDOUT terraform:   }
2025-06-02 19:16:47.207276 | orchestrator | 19:16:47.207 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_base_volume[5] will be created
2025-06-02 19:16:47.207379 | orchestrator | 19:16:47.207 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-06-02 19:16:47.209062 | orchestrator | 19:16:47.207 STDOUT terraform:   + attachment = (known after apply)
2025-06-02 19:16:47.209123 | orchestrator | 19:16:47.209 STDOUT terraform:   + availability_zone = "nova"
2025-06-02 19:16:47.209167 | orchestrator | 19:16:47.209 STDOUT terraform:   + id = (known after apply)
2025-06-02 19:16:47.209225 | orchestrator | 19:16:47.209 STDOUT terraform:   + image_id = (known after apply)
2025-06-02 19:16:47.209319 | orchestrator | 19:16:47.209 STDOUT terraform:   + metadata = (known after apply)
2025-06-02 19:16:47.209391 | orchestrator | 19:16:47.209 STDOUT terraform:   + name = "testbed-volume-5-node-base"
2025-06-02 19:16:47.209447 | orchestrator | 19:16:47.209 STDOUT terraform:   + region = (known after apply)
2025-06-02 19:16:47.209476 | orchestrator | 19:16:47.209 STDOUT terraform:   + size = 80
2025-06-02 19:16:47.209514 | orchestrator | 19:16:47.209 STDOUT terraform:   + volume_retype_policy = "never"
2025-06-02 19:16:47.209580 | orchestrator | 19:16:47.209 STDOUT terraform:   + volume_type = "ssd"
2025-06-02 19:16:47.209633 | orchestrator | 19:16:47.209 STDOUT terraform:   }
2025-06-02 19:16:47.209695 | orchestrator | 19:16:47.209 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_volume[0] will be created
2025-06-02 19:16:47.210068 | orchestrator | 19:16:47.209 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_volume" {
2025-06-02 19:16:47.210164 | orchestrator | 19:16:47.210 STDOUT terraform:   + attachment = (known after apply)
2025-06-02 19:16:47.210391 | orchestrator | 19:16:47.210 STDOUT terraform:   +
availability_zone = "nova" 2025-06-02 19:16:47.210448 | orchestrator | 19:16:47.210 STDOUT terraform:  + id = (known after apply) 2025-06-02 19:16:47.210508 | orchestrator | 19:16:47.210 STDOUT terraform:  + metadata = (known after apply) 2025-06-02 19:16:47.210564 | orchestrator | 19:16:47.210 STDOUT terraform:  + name = "testbed-volume-0-node-3" 2025-06-02 19:16:47.210618 | orchestrator | 19:16:47.210 STDOUT terraform:  + region = (known after apply) 2025-06-02 19:16:47.210646 | orchestrator | 19:16:47.210 STDOUT terraform:  + size = 20 2025-06-02 19:16:47.210685 | orchestrator | 19:16:47.210 STDOUT terraform:  + volume_retype_policy = "never" 2025-06-02 19:16:47.210717 | orchestrator | 19:16:47.210 STDOUT terraform:  + volume_type = "ssd" 2025-06-02 19:16:47.210749 | orchestrator | 19:16:47.210 STDOUT terraform:  } 2025-06-02 19:16:47.210802 | orchestrator | 19:16:47.210 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[1] will be created 2025-06-02 19:16:47.210897 | orchestrator | 19:16:47.210 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-06-02 19:16:47.211195 | orchestrator | 19:16:47.210 STDOUT terraform:  + attachment = (known after apply) 2025-06-02 19:16:47.211247 | orchestrator | 19:16:47.211 STDOUT terraform:  + availability_zone = "nova" 2025-06-02 19:16:47.211305 | orchestrator | 19:16:47.211 STDOUT terraform:  + id = (known after apply) 2025-06-02 19:16:47.211349 | orchestrator | 19:16:47.211 STDOUT terraform:  + metadata = (known after apply) 2025-06-02 19:16:47.211409 | orchestrator | 19:16:47.211 STDOUT terraform:  + name = "testbed-volume-1-node-4" 2025-06-02 19:16:47.211511 | orchestrator | 19:16:47.211 STDOUT terraform:  + region = (known after apply) 2025-06-02 19:16:47.211572 | orchestrator | 19:16:47.211 STDOUT terraform:  + size = 20 2025-06-02 19:16:47.211606 | orchestrator | 19:16:47.211 STDOUT terraform:  + volume_retype_policy = "never" 2025-06-02 19:16:47.211638 | orchestrator | 
19:16:47.211 STDOUT terraform:  + volume_type = "ssd" 2025-06-02 19:16:47.211659 | orchestrator | 19:16:47.211 STDOUT terraform:  } 2025-06-02 19:16:47.211711 | orchestrator | 19:16:47.211 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[2] will be created 2025-06-02 19:16:47.211777 | orchestrator | 19:16:47.211 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-06-02 19:16:47.211831 | orchestrator | 19:16:47.211 STDOUT terraform:  + attachment = (known after apply) 2025-06-02 19:16:47.211863 | orchestrator | 19:16:47.211 STDOUT terraform:  + availability_zone = "nova" 2025-06-02 19:16:47.211914 | orchestrator | 19:16:47.211 STDOUT terraform:  + id = (known after apply) 2025-06-02 19:16:47.211964 | orchestrator | 19:16:47.211 STDOUT terraform:  + metadata = (known after apply) 2025-06-02 19:16:47.212016 | orchestrator | 19:16:47.211 STDOUT terraform:  + name = "testbed-volume-2-node-5" 2025-06-02 19:16:47.212060 | orchestrator | 19:16:47.212 STDOUT terraform:  + region = (known after apply) 2025-06-02 19:16:47.212095 | orchestrator | 19:16:47.212 STDOUT terraform:  + size = 20 2025-06-02 19:16:47.212133 | orchestrator | 19:16:47.212 STDOUT terraform:  + volume_retype_policy = "never" 2025-06-02 19:16:47.212166 | orchestrator | 19:16:47.212 STDOUT terraform:  + volume_type = "ssd" 2025-06-02 19:16:47.212193 | orchestrator | 19:16:47.212 STDOUT terraform:  } 2025-06-02 19:16:47.212246 | orchestrator | 19:16:47.212 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[3] will be created 2025-06-02 19:16:47.212306 | orchestrator | 19:16:47.212 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-06-02 19:16:47.212349 | orchestrator | 19:16:47.212 STDOUT terraform:  + attachment = (known after apply) 2025-06-02 19:16:47.212380 | orchestrator | 19:16:47.212 STDOUT terraform:  + availability_zone = "nova" 2025-06-02 19:16:47.212431 | orchestrator | 19:16:47.212 STDOUT 
terraform:  + id = (known after apply) 2025-06-02 19:16:47.212482 | orchestrator | 19:16:47.212 STDOUT terraform:  + metadata = (known after apply) 2025-06-02 19:16:47.212529 | orchestrator | 19:16:47.212 STDOUT terraform:  + name = "testbed-volume-3-node-3" 2025-06-02 19:16:47.212714 | orchestrator | 19:16:47.212 STDOUT terraform:  + region = (known after apply) 2025-06-02 19:16:47.213009 | orchestrator | 19:16:47.212 STDOUT terraform:  + size = 20 2025-06-02 19:16:47.213074 | orchestrator | 19:16:47.213 STDOUT terraform:  + volume_retype_policy = "never" 2025-06-02 19:16:47.213109 | orchestrator | 19:16:47.213 STDOUT terraform:  + volume_type = "ssd" 2025-06-02 19:16:47.213150 | orchestrator | 19:16:47.213 STDOUT terraform:  } 2025-06-02 19:16:47.213216 | orchestrator | 19:16:47.213 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[4] will be created 2025-06-02 19:16:47.213274 | orchestrator | 19:16:47.213 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-06-02 19:16:47.213316 | orchestrator | 19:16:47.213 STDOUT terraform:  + attachment = (known after apply) 2025-06-02 19:16:47.213356 | orchestrator | 19:16:47.213 STDOUT terraform:  + availability_zone = "nova" 2025-06-02 19:16:47.213408 | orchestrator | 19:16:47.213 STDOUT terraform:  + id = (known after apply) 2025-06-02 19:16:47.213452 | orchestrator | 19:16:47.213 STDOUT terraform:  + metadata = (known after apply) 2025-06-02 19:16:47.213503 | orchestrator | 19:16:47.213 STDOUT terraform:  + name = "testbed-volume-4-node-4" 2025-06-02 19:16:47.213546 | orchestrator | 19:16:47.213 STDOUT terraform:  + region = (known after apply) 2025-06-02 19:16:47.213573 | orchestrator | 19:16:47.213 STDOUT terraform:  + size = 20 2025-06-02 19:16:47.213604 | orchestrator | 19:16:47.213 STDOUT terraform:  + volume_retype_policy = "never" 2025-06-02 19:16:47.213645 | orchestrator | 19:16:47.213 STDOUT terraform:  + volume_type = "ssd" 2025-06-02 19:16:47.213673 | 
orchestrator | 19:16:47.213 STDOUT terraform:  } 2025-06-02 19:16:47.213772 | orchestrator | 19:16:47.213 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[5] will be created 2025-06-02 19:16:47.213825 | orchestrator | 19:16:47.213 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-06-02 19:16:47.213898 | orchestrator | 19:16:47.213 STDOUT terraform:  + attachment = (known after apply) 2025-06-02 19:16:47.213979 | orchestrator | 19:16:47.213 STDOUT terraform:  + availability_zone = "nova" 2025-06-02 19:16:47.214038 | orchestrator | 19:16:47.213 STDOUT terraform:  + id = (known after apply) 2025-06-02 19:16:47.214098 | orchestrator | 19:16:47.214 STDOUT terraform:  + metadata = (known after apply) 2025-06-02 19:16:47.214157 | orchestrator | 19:16:47.214 STDOUT terraform:  + name = "testbed-volume-5-node-5" 2025-06-02 19:16:47.214202 | orchestrator | 19:16:47.214 STDOUT terraform:  + region = (known after apply) 2025-06-02 19:16:47.214240 | orchestrator | 19:16:47.214 STDOUT terraform:  + size = 20 2025-06-02 19:16:47.214273 | orchestrator | 19:16:47.214 STDOUT terraform:  + volume_retype_policy = "never" 2025-06-02 19:16:47.214311 | orchestrator | 19:16:47.214 STDOUT terraform:  + volume_type = "ssd" 2025-06-02 19:16:47.214333 | orchestrator | 19:16:47.214 STDOUT terraform:  } 2025-06-02 19:16:47.214390 | orchestrator | 19:16:47.214 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[6] will be created 2025-06-02 19:16:47.214447 | orchestrator | 19:16:47.214 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-06-02 19:16:47.214490 | orchestrator | 19:16:47.214 STDOUT terraform:  + attachment = (known after apply) 2025-06-02 19:16:47.214526 | orchestrator | 19:16:47.214 STDOUT terraform:  + availability_zone = "nova" 2025-06-02 19:16:47.214664 | orchestrator | 19:16:47.214 STDOUT terraform:  + id = (known after apply) 2025-06-02 19:16:47.214807 | orchestrator | 
19:16:47.214 STDOUT terraform:  + metadata = (known after apply) 2025-06-02 19:16:47.214874 | orchestrator | 19:16:47.214 STDOUT terraform:  + name = "testbed-volume-6-node-3" 2025-06-02 19:16:47.214938 | orchestrator | 19:16:47.214 STDOUT terraform:  + region = (known after apply) 2025-06-02 19:16:47.214968 | orchestrator | 19:16:47.214 STDOUT terraform:  + size = 20 2025-06-02 19:16:47.215034 | orchestrator | 19:16:47.215 STDOUT terraform:  + volume_retype_policy = "never" 2025-06-02 19:16:47.215256 | orchestrator | 19:16:47.215 STDOUT terraform:  + volume_type = "ssd" 2025-06-02 19:16:47.215290 | orchestrator | 19:16:47.215 STDOUT terraform:  } 2025-06-02 19:16:47.215472 | orchestrator | 19:16:47.215 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[7] will be created 2025-06-02 19:16:47.215685 | orchestrator | 19:16:47.215 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-06-02 19:16:47.215736 | orchestrator | 19:16:47.215 STDOUT terraform:  + attachment = (known after apply) 2025-06-02 19:16:47.215834 | orchestrator | 19:16:47.215 STDOUT terraform:  + availability_zone = "nova" 2025-06-02 19:16:47.215941 | orchestrator | 19:16:47.215 STDOUT terraform:  + id = (known after apply) 2025-06-02 19:16:47.216027 | orchestrator | 19:16:47.215 STDOUT terraform:  + metadata = (known after apply) 2025-06-02 19:16:47.216091 | orchestrator | 19:16:47.216 STDOUT terraform:  + name = "testbed-volume-7-node-4" 2025-06-02 19:16:47.218235 | orchestrator | 19:16:47.218 STDOUT terraform:  + region = (known after apply) 2025-06-02 19:16:47.218316 | orchestrator | 19:16:47.218 STDOUT terraform:  + size = 20 2025-06-02 19:16:47.218504 | orchestrator | 19:16:47.218 STDOUT terraform:  + volume_retype_policy = "never" 2025-06-02 19:16:47.218565 | orchestrator | 19:16:47.218 STDOUT terraform:  + volume_type = "ssd" 2025-06-02 19:16:47.218593 | orchestrator | 19:16:47.218 STDOUT terraform:  } 2025-06-02 19:16:47.219477 | orchestrator | 
19:16:47.218 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[8] will be created 2025-06-02 19:16:47.219814 | orchestrator | 19:16:47.219 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-06-02 19:16:47.219880 | orchestrator | 19:16:47.219 STDOUT terraform:  + attachment = (known after apply) 2025-06-02 19:16:47.219919 | orchestrator | 19:16:47.219 STDOUT terraform:  + availability_zone = "nova" 2025-06-02 19:16:47.220482 | orchestrator | 19:16:47.219 STDOUT terraform:  + id = (known after apply) 2025-06-02 19:16:47.220565 | orchestrator | 19:16:47.220 STDOUT terraform:  + metadata = (known after apply) 2025-06-02 19:16:47.220617 | orchestrator | 19:16:47.220 STDOUT terraform:  + name = "testbed-volume-8-node-5" 2025-06-02 19:16:47.220664 | orchestrator | 19:16:47.220 STDOUT terraform:  + region = (known after apply) 2025-06-02 19:16:47.220712 | orchestrator | 19:16:47.220 STDOUT terraform:  + size = 20 2025-06-02 19:16:47.220772 | orchestrator | 19:16:47.220 STDOUT terraform:  + volume_retype_policy = "never" 2025-06-02 19:16:47.220809 | orchestrator | 19:16:47.220 STDOUT terraform:  + volume_type = "ssd" 2025-06-02 19:16:47.220833 | orchestrator | 19:16:47.220 STDOUT terraform:  } 2025-06-02 19:16:47.220895 | orchestrator | 19:16:47.220 STDOUT terraform:  # openstack_compute_instance_v2.manager_server will be created 2025-06-02 19:16:47.220948 | orchestrator | 19:16:47.220 STDOUT terraform:  + resource "openstack_compute_instance_v2" "manager_server" { 2025-06-02 19:16:47.221005 | orchestrator | 19:16:47.220 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-06-02 19:16:47.221051 | orchestrator | 19:16:47.221 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-06-02 19:16:47.221199 | orchestrator | 19:16:47.221 STDOUT terraform:  + all_metadata = (known after apply) 2025-06-02 19:16:47.221249 | orchestrator | 19:16:47.221 STDOUT terraform:  + all_tags = (known after apply) 2025-06-02 
19:16:47.221281 | orchestrator | 19:16:47.221 STDOUT terraform:  + availability_zone = "nova" 2025-06-02 19:16:47.221323 | orchestrator | 19:16:47.221 STDOUT terraform:  + config_drive = true 2025-06-02 19:16:47.221369 | orchestrator | 19:16:47.221 STDOUT terraform:  + created = (known after apply) 2025-06-02 19:16:47.221412 | orchestrator | 19:16:47.221 STDOUT terraform:  + flavor_id = (known after apply) 2025-06-02 19:16:47.221461 | orchestrator | 19:16:47.221 STDOUT terraform:  + flavor_name = "OSISM-4V-16" 2025-06-02 19:16:47.221511 | orchestrator | 19:16:47.221 STDOUT terraform:  + force_delete = false 2025-06-02 19:16:47.221555 | orchestrator | 19:16:47.221 STDOUT terraform:  + hypervisor_hostname = (known after apply) 2025-06-02 19:16:47.221669 | orchestrator | 19:16:47.221 STDOUT terraform:  + id = (known after apply) 2025-06-02 19:16:47.221721 | orchestrator | 19:16:47.221 STDOUT terraform:  + image_id = (known after apply) 2025-06-02 19:16:47.223652 | orchestrator | 19:16:47.221 STDOUT terraform:  + image_name = (known after apply) 2025-06-02 19:16:47.223728 | orchestrator | 19:16:47.223 STDOUT terraform:  + key_pair = "testbed" 2025-06-02 19:16:47.223808 | orchestrator | 19:16:47.223 STDOUT terraform:  + name = "testbed-manager" 2025-06-02 19:16:47.223847 | orchestrator | 19:16:47.223 STDOUT terraform:  + power_state = "active" 2025-06-02 19:16:47.223896 | orchestrator | 19:16:47.223 STDOUT terraform:  + region = (known after apply) 2025-06-02 19:16:47.223939 | orchestrator | 19:16:47.223 STDOUT terraform:  + security_groups = (known after apply) 2025-06-02 19:16:47.223971 | orchestrator | 19:16:47.223 STDOUT terraform:  + stop_before_destroy = false 2025-06-02 19:16:47.224026 | orchestrator | 19:16:47.223 STDOUT terraform:  + updated = (known after apply) 2025-06-02 19:16:47.224116 | orchestrator | 19:16:47.224 STDOUT terraform:  + user_data = (known after apply) 2025-06-02 19:16:47.224165 | orchestrator | 19:16:47.224 STDOUT terraform:  + block_device 
{ 2025-06-02 19:16:47.224238 | orchestrator | 19:16:47.224 STDOUT terraform:  + boot_index = 0 2025-06-02 19:16:47.224306 | orchestrator | 19:16:47.224 STDOUT terraform:  + delete_on_termination = false 2025-06-02 19:16:47.224347 | orchestrator | 19:16:47.224 STDOUT terraform:  + destination_type = "volume" 2025-06-02 19:16:47.224384 | orchestrator | 19:16:47.224 STDOUT terraform:  + multiattach = false 2025-06-02 19:16:47.224423 | orchestrator | 19:16:47.224 STDOUT terraform:  + source_type = "volume" 2025-06-02 19:16:47.224469 | orchestrator | 19:16:47.224 STDOUT terraform:  + uuid = (known after apply) 2025-06-02 19:16:47.224493 | orchestrator | 19:16:47.224 STDOUT terraform:  } 2025-06-02 19:16:47.224516 | orchestrator | 19:16:47.224 STDOUT terraform:  + network { 2025-06-02 19:16:47.224551 | orchestrator | 19:16:47.224 STDOUT terraform:  + access_network = false 2025-06-02 19:16:47.224604 | orchestrator | 19:16:47.224 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-06-02 19:16:47.224644 | orchestrator | 19:16:47.224 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-06-02 19:16:47.224683 | orchestrator | 19:16:47.224 STDOUT terraform:  + mac = (known after apply) 2025-06-02 19:16:47.224721 | orchestrator | 19:16:47.224 STDOUT terraform:  + name = (known after apply) 2025-06-02 19:16:47.224803 | orchestrator | 19:16:47.224 STDOUT terraform:  + port = (known after apply) 2025-06-02 19:16:47.226094 | orchestrator | 19:16:47.224 STDOUT terraform:  + uuid = (known after apply) 2025-06-02 19:16:47.226280 | orchestrator | 19:16:47.226 STDOUT terraform:  } 2025-06-02 19:16:47.226309 | orchestrator | 19:16:47.226 STDOUT terraform:  } 2025-06-02 19:16:47.226376 | orchestrator | 19:16:47.226 STDOUT terraform:  # openstack_compute_instance_v2.node_server[0] will be created 2025-06-02 19:16:47.226428 | orchestrator | 19:16:47.226 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-06-02 19:16:47.228172 | orchestrator | 
19:16:47.226 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-06-02 19:16:47.228396 | orchestrator | 19:16:47.228 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-06-02 19:16:47.235178 | orchestrator | 19:16:47.228 STDOUT terraform:  + all_metadata = (known after apply) 2025-06-02 19:16:47.236734 | orchestrator | 19:16:47.235 STDOUT terraform:  + all_tags = (known after apply) 2025-06-02 19:16:47.236924 | orchestrator | 19:16:47.235 STDOUT terraform:  + availability_zone = "nova" 2025-06-02 19:16:47.236934 | orchestrator | 19:16:47.235 STDOUT terraform:  + config_drive = true 2025-06-02 19:16:47.236939 | orchestrator | 19:16:47.235 STDOUT terraform:  + created = (known after apply) 2025-06-02 19:16:47.236943 | orchestrator | 19:16:47.235 STDOUT terraform:  + flavor_id = (known after apply) 2025-06-02 19:16:47.236947 | orchestrator | 19:16:47.235 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-06-02 19:16:47.236981 | orchestrator | 19:16:47.235 STDOUT terraform:  + force_delete = false 2025-06-02 19:16:47.236986 | orchestrator | 19:16:47.235 STDOUT terraform:  + hypervisor_hostname = (known after apply) 2025-06-02 19:16:47.236990 | orchestrator | 19:16:47.235 STDOUT terraform:  + id = (known after apply) 2025-06-02 19:16:47.236994 | orchestrator | 19:16:47.235 STDOUT terraform:  + image_id = (known after apply) 2025-06-02 19:16:47.236998 | orchestrator | 19:16:47.235 STDOUT terraform:  + image_name = (known after apply) 2025-06-02 19:16:47.237003 | orchestrator | 19:16:47.235 STDOUT terraform:  + key_pair = "testbed" 2025-06-02 19:16:47.237007 | orchestrator | 19:16:47.235 STDOUT terraform:  + name = "testbed-node-0" 2025-06-02 19:16:47.237011 | orchestrator | 19:16:47.235 STDOUT terraform:  + power_state = "active" 2025-06-02 19:16:47.237024 | orchestrator | 19:16:47.235 STDOUT terraform:  + region = (known after apply) 2025-06-02 19:16:47.237029 | orchestrator | 19:16:47.236 STDOUT terraform:  + security_groups = (known after apply) 
2025-06-02 19:16:47.237033 | orchestrator | 19:16:47.236 STDOUT terraform:  + stop_before_destroy = false 2025-06-02 19:16:47.237052 | orchestrator | 19:16:47.236 STDOUT terraform:  + updated = (known after apply) 2025-06-02 19:16:47.237056 | orchestrator | 19:16:47.236 STDOUT terraform:  + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2025-06-02 19:16:47.237066 | orchestrator | 19:16:47.236 STDOUT terraform:  + block_device { 2025-06-02 19:16:47.237070 | orchestrator | 19:16:47.236 STDOUT terraform:  + boot_index = 0 2025-06-02 19:16:47.237074 | orchestrator | 19:16:47.236 STDOUT terraform:  + delete_on_termination = false 2025-06-02 19:16:47.237082 | orchestrator | 19:16:47.236 STDOUT terraform:  + destination_type = "volume" 2025-06-02 19:16:47.237092 | orchestrator | 19:16:47.236 STDOUT terraform:  + multiattach = false 2025-06-02 19:16:47.237096 | orchestrator | 19:16:47.236 STDOUT terraform:  + source_type = "volume" 2025-06-02 19:16:47.237099 | orchestrator | 19:16:47.236 STDOUT terraform:  + uuid = (known after apply) 2025-06-02 19:16:47.237104 | orchestrator | 19:16:47.236 STDOUT terraform:  } 2025-06-02 19:16:47.237108 | orchestrator | 19:16:47.236 STDOUT terraform:  + network { 2025-06-02 19:16:47.237112 | orchestrator | 19:16:47.236 STDOUT terraform:  + access_network = false 2025-06-02 19:16:47.237115 | orchestrator | 19:16:47.236 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-06-02 19:16:47.237119 | orchestrator | 19:16:47.236 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-06-02 19:16:47.237123 | orchestrator | 19:16:47.236 STDOUT terraform:  + mac = (known after apply) 2025-06-02 19:16:47.237127 | orchestrator | 19:16:47.236 STDOUT terraform:  + name = (known after apply) 2025-06-02 19:16:47.237130 | orchestrator | 19:16:47.236 STDOUT terraform:  + port = (known after apply) 2025-06-02 19:16:47.237134 | orchestrator | 19:16:47.236 STDOUT terraform:  + uuid = (known after apply) 2025-06-02 19:16:47.237391 | 
orchestrator | 19:16:47.236 STDOUT terraform:  } 2025-06-02 19:16:47.237397 | orchestrator | 19:16:47.236 STDOUT terraform:  } 2025-06-02 19:16:47.239716 | orchestrator | 19:16:47.236 STDOUT terraform:  # openstack_compute_instance_v2.node_server[1] will be created 2025-06-02 19:16:47.239735 | orchestrator | 19:16:47.237 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-06-02 19:16:47.239750 | orchestrator | 19:16:47.237 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-06-02 19:16:47.239754 | orchestrator | 19:16:47.237 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-06-02 19:16:47.239758 | orchestrator | 19:16:47.237 STDOUT terraform:  + all_metadata = (known after apply) 2025-06-02 19:16:47.239762 | orchestrator | 19:16:47.237 STDOUT terraform:  + all_tags = (known after apply) 2025-06-02 19:16:47.239783 | orchestrator | 19:16:47.237 STDOUT terraform:  + availability_zone = "nova" 2025-06-02 19:16:47.239787 | orchestrator | 19:16:47.237 STDOUT terraform:  + config_drive = true 2025-06-02 19:16:47.239791 | orchestrator | 19:16:47.237 STDOUT terraform:  + created = (known after apply) 2025-06-02 19:16:47.239811 | orchestrator | 19:16:47.237 STDOUT terraform:  + flavor_id = (known after apply) 2025-06-02 19:16:47.239815 | orchestrator | 19:16:47.237 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-06-02 19:16:47.239818 | orchestrator | 19:16:47.237 STDOUT terraform:  + force_delete = false 2025-06-02 19:16:47.239829 | orchestrator | 19:16:47.237 STDOUT terraform:  + hypervisor_hostname = (known after apply) 2025-06-02 19:16:47.239833 | orchestrator | 19:16:47.237 STDOUT terraform:  + id = (known after apply) 2025-06-02 19:16:47.239837 | orchestrator | 19:16:47.238 STDOUT terraform:  + image_id = (known after apply) 2025-06-02 19:16:47.239845 | orchestrator | 19:16:47.238 STDOUT terraform:  + image_name = (known after apply) 2025-06-02 19:16:47.239857 | orchestrator | 19:16:47.238 STDOUT terraform:  + 
key_pair = "testbed" 2025-06-02 19:16:47.239861 | orchestrator | 19:16:47.238 STDOUT terraform:  + name = "testbed-node-1" 2025-06-02 19:16:47.239864 | orchestrator | 19:16:47.238 STDOUT terraform:  + power_state = "active" 2025-06-02 19:16:47.239868 | orchestrator | 19:16:47.238 STDOUT terraform:  + region = (known after apply) 2025-06-02 19:16:47.239888 | orchestrator | 19:16:47.238 STDOUT terraform:  + security_groups = (known after apply) 2025-06-02 19:16:47.239892 | orchestrator | 19:16:47.238 STDOUT terraform:  + stop_before_destroy = false 2025-06-02 19:16:47.239911 | orchestrator | 19:16:47.238 STDOUT terraform:  + updated = (known after apply) 2025-06-02 19:16:47.239921 | orchestrator | 19:16:47.238 STDOUT terraform:  + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2025-06-02 19:16:47.239925 | orchestrator | 19:16:47.238 STDOUT terraform:  + block_device { 2025-06-02 19:16:47.239929 | orchestrator | 19:16:47.238 STDOUT terraform:  + boot_index = 0 2025-06-02 19:16:47.239933 | orchestrator | 19:16:47.238 STDOUT terraform:  + delete_on_termination = false 2025-06-02 19:16:47.239937 | orchestrator | 19:16:47.238 STDOUT terraform:  + destination_type = "volume" 2025-06-02 19:16:47.239940 | orchestrator | 19:16:47.238 STDOUT terraform:  + multiattach = false 2025-06-02 19:16:47.239950 | orchestrator | 19:16:47.238 STDOUT terraform:  + source_type = "volume" 2025-06-02 19:16:47.239954 | orchestrator | 19:16:47.238 STDOUT terraform:  + uuid = (known after apply) 2025-06-02 19:16:47.239958 | orchestrator | 19:16:47.238 STDOUT terraform:  } 2025-06-02 19:16:47.239961 | orchestrator | 19:16:47.238 STDOUT terraform:  + network { 2025-06-02 19:16:47.239965 | orchestrator | 19:16:47.238 STDOUT terraform:  + access_network = false 2025-06-02 19:16:47.239982 | orchestrator | 19:16:47.238 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-06-02 19:16:47.239987 | orchestrator | 19:16:47.238 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-06-02 
19:16:47.239991 | orchestrator | 19:16:47.238 STDOUT terraform:  + mac = (known after apply) 2025-06-02 19:16:47.240002 | orchestrator | 19:16:47.238 STDOUT terraform:  + name = (known after apply) 2025-06-02 19:16:47.240006 | orchestrator | 19:16:47.238 STDOUT terraform:  + port = (known after apply) 2025-06-02 19:16:47.240009 | orchestrator | 19:16:47.238 STDOUT terraform:  + uuid = (known after apply) 2025-06-02 19:16:47.240029 | orchestrator | 19:16:47.238 STDOUT terraform:  } 2025-06-02 19:16:47.240033 | orchestrator | 19:16:47.238 STDOUT terraform:  } 2025-06-02 19:16:47.240037 | orchestrator | 19:16:47.238 STDOUT terraform:  # openstack_compute_instance_v2.node_server[2] will be created 2025-06-02 19:16:47.240041 | orchestrator | 19:16:47.238 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-06-02 19:16:47.240045 | orchestrator | 19:16:47.238 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-06-02 19:16:47.240071 | orchestrator | 19:16:47.239 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-06-02 19:16:47.240080 | orchestrator | 19:16:47.239 STDOUT terraform:  + all_metadata = (known after apply) 2025-06-02 19:16:47.240085 | orchestrator | 19:16:47.239 STDOUT terraform:  + all_tags = (known after apply) 2025-06-02 19:16:47.240113 | orchestrator | 19:16:47.239 STDOUT terraform:  + availability_zone = "nova" 2025-06-02 19:16:47.240118 | orchestrator | 19:16:47.239 STDOUT terraform:  + config_drive = true 2025-06-02 19:16:47.240122 | orchestrator | 19:16:47.239 STDOUT terraform:  + created = (known after apply) 2025-06-02 19:16:47.240126 | orchestrator | 19:16:47.239 STDOUT terraform:  + flavor_id = (known after apply) 2025-06-02 19:16:47.240129 | orchestrator | 19:16:47.239 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-06-02 19:16:47.240133 | orchestrator | 19:16:47.239 STDOUT terraform:  + force_delete = false 2025-06-02 19:16:47.240137 | orchestrator | 19:16:47.239 STDOUT terraform:  + 
      + hypervisor_hostname = (known after apply)
      + id                  = (known after apply)
      + image_id            = (known after apply)
      + image_name          = (known after apply)
      + key_pair            = "testbed"
      + name                = "testbed-node-2"
      + power_state         = "active"
      + region              = (known after apply)
      + security_groups     = (known after apply)
      + stop_before_destroy = false
      + updated             = (known after apply)
      + user_data           = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"

      + block_device {
          + boot_index            = 0
          + delete_on_termination = false
          + destination_type      = "volume"
          + multiattach           = false
          + source_type           = "volume"
          + uuid                  = (known after apply)
        }

      + network {
          + access_network = false
          + fixed_ip_v4    = (known after apply)
          + fixed_ip_v6    = (known after apply)
          + mac            = (known after apply)
          + name           = (known after apply)
          + port           = (known after apply)
          + uuid           = (known after apply)
        }
    }

  # openstack_compute_instance_v2.node_server[3] will be created
  + resource "openstack_compute_instance_v2" "node_server" {
      + access_ip_v4        = (known after apply)
      + access_ip_v6        = (known after apply)
      + all_metadata        = (known after apply)
      + all_tags            = (known after apply)
      + availability_zone   = "nova"
      + config_drive        = true
      + created             = (known after apply)
      + flavor_id           = (known after apply)
      + flavor_name         = "OSISM-8V-32"
      + force_delete        = false
      + hypervisor_hostname = (known after apply)
      + id                  = (known after apply)
      + image_id            = (known after apply)
      + image_name          = (known after apply)
      + key_pair            = "testbed"
      + name                = "testbed-node-3"
      + power_state         = "active"
      + region              = (known after apply)
      + security_groups     = (known after apply)
      + stop_before_destroy = false
      + updated             = (known after apply)
      + user_data           = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"

      + block_device {
          + boot_index            = 0
          + delete_on_termination = false
          + destination_type      = "volume"
          + multiattach           = false
          + source_type           = "volume"
          + uuid                  = (known after apply)
        }

      + network {
          + access_network = false
          + fixed_ip_v4    = (known after apply)
          + fixed_ip_v6    = (known after apply)
          + mac            = (known after apply)
          + name           = (known after apply)
          + port           = (known after apply)
          + uuid           = (known after apply)
        }
    }

  # openstack_compute_instance_v2.node_server[4] will be created
  + resource "openstack_compute_instance_v2" "node_server" {
      + access_ip_v4        = (known after apply)
      + access_ip_v6        = (known after apply)
      + all_metadata        = (known after apply)
      + all_tags            = (known after apply)
      + availability_zone   = "nova"
      + config_drive        = true
      + created             = (known after apply)
      + flavor_id           = (known after apply)
      + flavor_name         = "OSISM-8V-32"
      + force_delete        = false
      + hypervisor_hostname = (known after apply)
      + id                  = (known after apply)
      + image_id            = (known after apply)
      + image_name          = (known after apply)
      + key_pair            = "testbed"
      + name                = "testbed-node-4"
      + power_state         = "active"
      + region              = (known after apply)
      + security_groups     = (known after apply)
      + stop_before_destroy = false
      + updated             = (known after apply)
      + user_data           = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"

      + block_device {
          + boot_index            = 0
          + delete_on_termination = false
          + destination_type      = "volume"
          + multiattach           = false
          + source_type           = "volume"
          + uuid                  = (known after apply)
        }

      + network {
          + access_network = false
          + fixed_ip_v4    = (known after apply)
          + fixed_ip_v6    = (known after apply)
          + mac            = (known after apply)
          + name           = (known after apply)
          + port           = (known after apply)
          + uuid           = (known after apply)
        }
    }
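The `node_server` plan entries above would be produced by a single counted resource block. A minimal sketch, assuming hypothetical names (`var.number_of_nodes`, `openstack_blockstorage_volume_v3.node_volume`, `openstack_networking_port_v2.node_port_management`) that are not taken from this log:

```hcl
# Sketch only: variable and sibling-resource names are assumptions,
# not confirmed by this build log.
resource "openstack_compute_instance_v2" "node_server" {
  count             = var.number_of_nodes          # [0]..[5] in this plan
  name              = "testbed-node-${count.index}"
  availability_zone = "nova"
  flavor_name       = "OSISM-8V-32"
  key_pair          = "testbed"
  config_drive      = true
  power_state       = "active"

  # Boot from a pre-created volume, matching source_type/destination_type
  # = "volume" and delete_on_termination = false in the plan output.
  block_device {
    uuid                  = openstack_blockstorage_volume_v3.node_volume[count.index].id
    source_type           = "volume"
    destination_type      = "volume"
    boot_index            = 0
    delete_on_termination = false
  }

  # Attach via a pre-created management port rather than a network name.
  network {
    port = openstack_networking_port_v2.node_port_management[count.index].id
  }
}
```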
  # openstack_compute_instance_v2.node_server[5] will be created
  + resource "openstack_compute_instance_v2" "node_server" {
      + access_ip_v4        = (known after apply)
      + access_ip_v6        = (known after apply)
      + all_metadata        = (known after apply)
      + all_tags            = (known after apply)
      + availability_zone   = "nova"
      + config_drive        = true
      + created             = (known after apply)
      + flavor_id           = (known after apply)
      + flavor_name         = "OSISM-8V-32"
      + force_delete        = false
      + hypervisor_hostname = (known after apply)
      + id                  = (known after apply)
      + image_id            = (known after apply)
      + image_name          = (known after apply)
      + key_pair            = "testbed"
      + name                = "testbed-node-5"
      + power_state         = "active"
      + region              = (known after apply)
      + security_groups     = (known after apply)
      + stop_before_destroy = false
      + updated             = (known after apply)
      + user_data           = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"

      + block_device {
          + boot_index            = 0
          + delete_on_termination = false
          + destination_type      = "volume"
          + multiattach           = false
          + source_type           = "volume"
          + uuid                  = (known after apply)
        }

      + network {
          + access_network = false
          + fixed_ip_v4    = (known after apply)
          + fixed_ip_v6    = (known after apply)
          + mac            = (known after apply)
          + name           = (known after apply)
          + port           = (known after apply)
          + uuid           = (known after apply)
        }
    }

  # openstack_compute_keypair_v2.key will be created
  + resource "openstack_compute_keypair_v2" "key" {
      + fingerprint = (known after apply)
      + id          = (known after apply)
      + name        = "testbed"
      + private_key = (sensitive value)
      + public_key  = (known after apply)
      + region      = (known after apply)
      + user_id     = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[0] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[1] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[2] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[3] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[4] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[5] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[6] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }
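The identical `node_volume_attachment` entries come from one counted attachment resource; every attribute is `(known after apply)` because both sides are created in the same run. A minimal sketch, assuming hypothetical names (`var.volumes_per_node`, `var.number_of_nodes`, `openstack_blockstorage_volume_v3.node_data_volume`) not present in this log:

```hcl
# Sketch only: the count expression and volume resource name are
# assumptions; the plan shows nine attachments ([0]..[8]).
resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
  count       = var.number_of_nodes * var.volumes_per_node
  instance_id = openstack_compute_instance_v2.node_server[count.index % var.number_of_nodes].id
  volume_id   = openstack_blockstorage_volume_v3.node_data_volume[count.index].id
}
```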
  # openstack_compute_volume_attach_v2.node_volume_attachment[7] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[8] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_networking_floatingip_associate_v2.manager_floating_ip_association will be created
  + resource "openstack_networking_floatingip_associate_v2" "manager_floating_ip_association" {
      + fixed_ip    = (known after apply)
      + floating_ip = (known after apply)
      + id          = (known after apply)
      + port_id     = (known after apply)
      + region      = (known after apply)
    }

  # openstack_networking_floatingip_v2.manager_floating_ip will be created
  + resource "openstack_networking_floatingip_v2" "manager_floating_ip" {
      + address    = (known after apply)
      + all_tags   = (known after apply)
      + dns_domain = (known after apply)
      + dns_name   = (known after apply)
      + fixed_ip   = (known after apply)
      + id         = (known after apply)
      + pool       = "public"
      + port_id    = (known after apply)
      + region     = (known after apply)
      + subnet_id  = (known after apply)
      + tenant_id  = (known after apply)
    }

  # openstack_networking_network_v2.net_management will be created
  + resource "openstack_networking_network_v2" "net_management" {
      + admin_state_up          = (known after apply)
      + all_tags                = (known after apply)
      + availability_zone_hints = [
          + "nova",
        ]
      + dns_domain              = (known after apply)
      + external                = (known after apply)
      + id                      = (known after apply)
      + mtu                     = (known after apply)
      + name                    = "net-testbed-management"
      + port_security_enabled   = (known after apply)
      + qos_policy_id           = (known after apply)
      + region                  = (known after apply)
      + shared                  = (known after apply)
      + tenant_id               = (known after apply)
      + transparent_vlan        = (known after apply)

      + segments (known after apply)
    }

  # openstack_networking_port_v2.manager_port_management will be created
  + resource "openstack_networking_port_v2" "manager_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.112.0/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/20"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.5"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[0] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.112.0/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.254/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/20"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.10"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[1] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.112.0/20"
        }
+ allowed_address_pairs { 2025-06-02 19:16:47.270821 | orchestrator | 19:16:47.270 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-06-02 19:16:47.270844 | orchestrator | 19:16:47.270 STDOUT terraform:  } 2025-06-02 19:16:47.270874 | orchestrator | 19:16:47.270 STDOUT terraform:  + allowed_address_pairs { 2025-06-02 19:16:47.270911 | orchestrator | 19:16:47.270 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-06-02 19:16:47.270934 | orchestrator | 19:16:47.270 STDOUT terraform:  } 2025-06-02 19:16:47.270965 | orchestrator | 19:16:47.270 STDOUT terraform:  + allowed_address_pairs { 2025-06-02 19:16:47.271001 | orchestrator | 19:16:47.270 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-06-02 19:16:47.271024 | orchestrator | 19:16:47.271 STDOUT terraform:  } 2025-06-02 19:16:47.271061 | orchestrator | 19:16:47.271 STDOUT terraform:  + binding (known after apply) 2025-06-02 19:16:47.271086 | orchestrator | 19:16:47.271 STDOUT terraform:  + fixed_ip { 2025-06-02 19:16:47.271118 | orchestrator | 19:16:47.271 STDOUT terraform:  + ip_address = "192.168.16.11" 2025-06-02 19:16:47.271157 | orchestrator | 19:16:47.271 STDOUT terraform:  + subnet_id = (known after apply) 2025-06-02 19:16:47.271180 | orchestrator | 19:16:47.271 STDOUT terraform:  } 2025-06-02 19:16:47.271246 | orchestrator | 19:16:47.271 STDOUT terraform:  } 2025-06-02 19:16:47.271302 | orchestrator | 19:16:47.271 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[2] will be created 2025-06-02 19:16:47.271363 | orchestrator | 19:16:47.271 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-06-02 19:16:47.271410 | orchestrator | 19:16:47.271 STDOUT terraform:  + admin_state_up = (known after apply) 2025-06-02 19:16:47.271455 | orchestrator | 19:16:47.271 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-06-02 19:16:47.271500 | orchestrator | 19:16:47.271 STDOUT terraform:  + all_security_group_ids = (known after 
apply) 2025-06-02 19:16:47.271548 | orchestrator | 19:16:47.271 STDOUT terraform:  + all_tags = (known after apply) 2025-06-02 19:16:47.271593 | orchestrator | 19:16:47.271 STDOUT terraform:  + device_id = (known after apply) 2025-06-02 19:16:47.271637 | orchestrator | 19:16:47.271 STDOUT terraform:  + device_owner = (known after apply) 2025-06-02 19:16:47.271681 | orchestrator | 19:16:47.271 STDOUT terraform:  + dns_assignment = (known after apply) 2025-06-02 19:16:47.271726 | orchestrator | 19:16:47.271 STDOUT terraform:  + dns_name = (known after apply) 2025-06-02 19:16:47.271800 | orchestrator | 19:16:47.271 STDOUT terraform:  + id = (known after apply) 2025-06-02 19:16:47.271846 | orchestrator | 19:16:47.271 STDOUT terraform:  + mac_address = (known after apply) 2025-06-02 19:16:47.271891 | orchestrator | 19:16:47.271 STDOUT terraform:  + network_id = (known after apply) 2025-06-02 19:16:47.271937 | orchestrator | 19:16:47.271 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-06-02 19:16:47.271981 | orchestrator | 19:16:47.271 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-06-02 19:16:47.272046 | orchestrator | 19:16:47.271 STDOUT terraform:  + region = (known after apply) 2025-06-02 19:16:47.272451 | orchestrator | 19:16:47.272 STDOUT terraform:  + security_group_ids = (known after apply) 2025-06-02 19:16:47.272514 | orchestrator | 19:16:47.272 STDOUT terraform:  + tenant_id = (known after apply) 2025-06-02 19:16:47.272546 | orchestrator | 19:16:47.272 STDOUT terraform:  + allowed_address_pairs { 2025-06-02 19:16:47.272584 | orchestrator | 19:16:47.272 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-06-02 19:16:47.272607 | orchestrator | 19:16:47.272 STDOUT terraform:  } 2025-06-02 19:16:47.272635 | orchestrator | 19:16:47.272 STDOUT terraform:  + allowed_address_pairs { 2025-06-02 19:16:47.272675 | orchestrator | 19:16:47.272 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-06-02 19:16:47.272696 | 
orchestrator | 19:16:47.272 STDOUT terraform:  } 2025-06-02 19:16:47.272776 | orchestrator | 19:16:47.272 STDOUT terraform:  + allowed_address_pairs { 2025-06-02 19:16:47.272823 | orchestrator | 19:16:47.272 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-06-02 19:16:47.272846 | orchestrator | 19:16:47.272 STDOUT terraform:  } 2025-06-02 19:16:47.272874 | orchestrator | 19:16:47.272 STDOUT terraform:  + allowed_address_pairs { 2025-06-02 19:16:47.272911 | orchestrator | 19:16:47.272 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-06-02 19:16:47.272939 | orchestrator | 19:16:47.272 STDOUT terraform:  } 2025-06-02 19:16:47.272970 | orchestrator | 19:16:47.272 STDOUT terraform:  + binding (known after apply) 2025-06-02 19:16:47.272992 | orchestrator | 19:16:47.272 STDOUT terraform:  + fixed_ip { 2025-06-02 19:16:47.273026 | orchestrator | 19:16:47.273 STDOUT terraform:  + ip_address = "192.168.16.12" 2025-06-02 19:16:47.273064 | orchestrator | 19:16:47.273 STDOUT terraform:  + subnet_id = (known after apply) 2025-06-02 19:16:47.273090 | orchestrator | 19:16:47.273 STDOUT terraform:  } 2025-06-02 19:16:47.273112 | orchestrator | 19:16:47.273 STDOUT terraform:  } 2025-06-02 19:16:47.273167 | orchestrator | 19:16:47.273 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[3] will be created 2025-06-02 19:16:47.273219 | orchestrator | 19:16:47.273 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-06-02 19:16:47.273266 | orchestrator | 19:16:47.273 STDOUT terraform:  + admin_state_up = (known after apply) 2025-06-02 19:16:47.273310 | orchestrator | 19:16:47.273 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-06-02 19:16:47.273352 | orchestrator | 19:16:47.273 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-06-02 19:16:47.273395 | orchestrator | 19:16:47.273 STDOUT terraform:  + all_tags = (known after apply) 2025-06-02 19:16:47.273438 | orchestrator | 
19:16:47.273 STDOUT terraform:  + device_id = (known after apply) 2025-06-02 19:16:47.273481 | orchestrator | 19:16:47.273 STDOUT terraform:  + device_owner = (known after apply) 2025-06-02 19:16:47.273545 | orchestrator | 19:16:47.273 STDOUT terraform:  + dns_assignment = (known after apply) 2025-06-02 19:16:47.273618 | orchestrator | 19:16:47.273 STDOUT terraform:  + dns_name = (known after apply) 2025-06-02 19:16:47.273681 | orchestrator | 19:16:47.273 STDOUT terraform:  + id = (known after apply) 2025-06-02 19:16:47.273726 | orchestrator | 19:16:47.273 STDOUT terraform:  + mac_address = (known after apply) 2025-06-02 19:16:47.273953 | orchestrator | 19:16:47.273 STDOUT terraform:  + network_id = (known after apply) 2025-06-02 19:16:47.274042 | orchestrator | 19:16:47.273 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-06-02 19:16:47.274115 | orchestrator | 19:16:47.274 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-06-02 19:16:47.274171 | orchestrator | 19:16:47.274 STDOUT terraform:  + region = (known after apply) 2025-06-02 19:16:47.274233 | orchestrator | 19:16:47.274 STDOUT terraform:  + security_group_ids = (known after apply) 2025-06-02 19:16:47.274283 | orchestrator | 19:16:47.274 STDOUT terraform:  + tenant_id = (known after apply) 2025-06-02 19:16:47.274312 | orchestrator | 19:16:47.274 STDOUT terraform:  + allowed_address_pairs { 2025-06-02 19:16:47.274350 | orchestrator | 19:16:47.274 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-06-02 19:16:47.274373 | orchestrator | 19:16:47.274 STDOUT terraform:  } 2025-06-02 19:16:47.274401 | orchestrator | 19:16:47.274 STDOUT terraform:  + allowed_address_pairs { 2025-06-02 19:16:47.274438 | orchestrator | 19:16:47.274 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-06-02 19:16:47.274491 | orchestrator | 19:16:47.274 STDOUT terraform:  } 2025-06-02 19:16:47.274520 | orchestrator | 19:16:47.274 STDOUT terraform:  + allowed_address_pairs { 2025-06-02 
19:16:47.274556 | orchestrator | 19:16:47.274 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-06-02 19:16:47.274598 | orchestrator | 19:16:47.274 STDOUT terraform:  } 2025-06-02 19:16:47.274691 | orchestrator | 19:16:47.274 STDOUT terraform:  + allowed_address_pairs { 2025-06-02 19:16:47.274985 | orchestrator | 19:16:47.274 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-06-02 19:16:47.275014 | orchestrator | 19:16:47.274 STDOUT terraform:  } 2025-06-02 19:16:47.275048 | orchestrator | 19:16:47.275 STDOUT terraform:  + binding (known after apply) 2025-06-02 19:16:47.275073 | orchestrator | 19:16:47.275 STDOUT terraform:  + fixed_ip { 2025-06-02 19:16:47.275109 | orchestrator | 19:16:47.275 STDOUT terraform:  + ip_address = "192.168.16.13" 2025-06-02 19:16:47.275185 | orchestrator | 19:16:47.275 STDOUT terraform:  + subnet_id = (known after apply) 2025-06-02 19:16:47.275223 | orchestrator | 19:16:47.275 STDOUT terraform:  } 2025-06-02 19:16:47.275282 | orchestrator | 19:16:47.275 STDOUT terraform:  } 2025-06-02 19:16:47.275378 | orchestrator | 19:16:47.275 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[4] will be created 2025-06-02 19:16:47.275454 | orchestrator | 19:16:47.275 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-06-02 19:16:47.275500 | orchestrator | 19:16:47.275 STDOUT terraform:  + admin_state_up = (known after apply) 2025-06-02 19:16:47.275545 | orchestrator | 19:16:47.275 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-06-02 19:16:47.275588 | orchestrator | 19:16:47.275 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-06-02 19:16:47.275726 | orchestrator | 19:16:47.275 STDOUT terraform:  + all_tags = (known after apply) 2025-06-02 19:16:47.276156 | orchestrator | 19:16:47.275 STDOUT terraform:  + device_id = (known after apply) 2025-06-02 19:16:47.276217 | orchestrator | 19:16:47.276 STDOUT terraform:  + device_owner = (known after 
apply) 2025-06-02 19:16:47.276264 | orchestrator | 19:16:47.276 STDOUT terraform:  + dns_assignment = (known after apply) 2025-06-02 19:16:47.276310 | orchestrator | 19:16:47.276 STDOUT terraform:  + dns_name = (known after apply) 2025-06-02 19:16:47.276356 | orchestrator | 19:16:47.276 STDOUT terraform:  + id = (known after apply) 2025-06-02 19:16:47.276399 | orchestrator | 19:16:47.276 STDOUT terraform:  + mac_address = (known after apply) 2025-06-02 19:16:47.276442 | orchestrator | 19:16:47.276 STDOUT terraform:  + network_id = (known after apply) 2025-06-02 19:16:47.276484 | orchestrator | 19:16:47.276 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-06-02 19:16:47.276527 | orchestrator | 19:16:47.276 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-06-02 19:16:47.276592 | orchestrator | 19:16:47.276 STDOUT terraform:  + region = (known after apply) 2025-06-02 19:16:47.276658 | orchestrator | 19:16:47.276 STDOUT terraform:  + security_group_ids = (known after apply) 2025-06-02 19:16:47.276786 | orchestrator | 19:16:47.276 STDOUT terraform:  + tenant_id = (known after apply) 2025-06-02 19:16:47.276893 | orchestrator | 19:16:47.276 STDOUT terraform:  + allowed_address_pairs { 2025-06-02 19:16:47.276977 | orchestrator | 19:16:47.276 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-06-02 19:16:47.277014 | orchestrator | 19:16:47.276 STDOUT terraform:  } 2025-06-02 19:16:47.277055 | orchestrator | 19:16:47.277 STDOUT terraform:  + allowed_address_pairs { 2025-06-02 19:16:47.277108 | orchestrator | 19:16:47.277 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-06-02 19:16:47.277141 | orchestrator | 19:16:47.277 STDOUT terraform:  } 2025-06-02 19:16:47.277184 | orchestrator | 19:16:47.277 STDOUT terraform:  + allowed_address_pairs { 2025-06-02 19:16:47.277239 | orchestrator | 19:16:47.277 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-06-02 19:16:47.277274 | orchestrator | 19:16:47.277 STDOUT terraform:  } 
2025-06-02 19:16:47.277319 | orchestrator | 19:16:47.277 STDOUT terraform:  + allowed_address_pairs { 2025-06-02 19:16:47.277610 | orchestrator | 19:16:47.277 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-06-02 19:16:47.277684 | orchestrator | 19:16:47.277 STDOUT terraform:  } 2025-06-02 19:16:47.277785 | orchestrator | 19:16:47.277 STDOUT terraform:  + binding (known after apply) 2025-06-02 19:16:47.277821 | orchestrator | 19:16:47.277 STDOUT terraform:  + fixed_ip { 2025-06-02 19:16:47.277874 | orchestrator | 19:16:47.277 STDOUT terraform:  + ip_address = "192.168.16.14" 2025-06-02 19:16:47.277915 | orchestrator | 19:16:47.277 STDOUT terraform:  + subnet_id = (known after apply) 2025-06-02 19:16:47.277940 | orchestrator | 19:16:47.277 STDOUT terraform:  } 2025-06-02 19:16:47.277963 | orchestrator | 19:16:47.277 STDOUT terraform:  } 2025-06-02 19:16:47.278032 | orchestrator | 19:16:47.277 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[5] will be created 2025-06-02 19:16:47.278092 | orchestrator | 19:16:47.278 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-06-02 19:16:47.278136 | orchestrator | 19:16:47.278 STDOUT terraform:  + admin_state_up = (known after apply) 2025-06-02 19:16:47.278185 | orchestrator | 19:16:47.278 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-06-02 19:16:47.278228 | orchestrator | 19:16:47.278 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-06-02 19:16:47.278272 | orchestrator | 19:16:47.278 STDOUT terraform:  + all_tags = (known after apply) 2025-06-02 19:16:47.278393 | orchestrator | 19:16:47.278 STDOUT terraform:  + device_id = (known after apply) 2025-06-02 19:16:47.278637 | orchestrator | 19:16:47.278 STDOUT terraform:  + device_owner = (known after apply) 2025-06-02 19:16:47.278689 | orchestrator | 19:16:47.278 STDOUT terraform:  + dns_assignment = (known after apply) 2025-06-02 19:16:47.278736 | orchestrator | 
19:16:47.278 STDOUT terraform:  + dns_name = (known after apply) 2025-06-02 19:16:47.278831 | orchestrator | 19:16:47.278 STDOUT terraform:  + id = (known after apply) 2025-06-02 19:16:47.278876 | orchestrator | 19:16:47.278 STDOUT terraform:  + mac_address = (known after apply) 2025-06-02 19:16:47.278929 | orchestrator | 19:16:47.278 STDOUT terraform:  + network_id = (known after apply) 2025-06-02 19:16:47.278972 | orchestrator | 19:16:47.278 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-06-02 19:16:47.279016 | orchestrator | 19:16:47.278 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-06-02 19:16:47.279079 | orchestrator | 19:16:47.279 STDOUT terraform:  + region = (known after apply) 2025-06-02 19:16:47.279352 | orchestrator | 19:16:47.279 STDOUT terraform:  + security_group_ids = (known after apply) 2025-06-02 19:16:47.279420 | orchestrator | 19:16:47.279 STDOUT terraform:  + tenant_id = (known after apply) 2025-06-02 19:16:47.279458 | orchestrator | 19:16:47.279 STDOUT terraform:  + allowed_address_pairs { 2025-06-02 19:16:47.279500 | orchestrator | 19:16:47.279 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-06-02 19:16:47.279525 | orchestrator | 19:16:47.279 STDOUT terraform:  } 2025-06-02 19:16:47.279558 | orchestrator | 19:16:47.279 STDOUT terraform:  + allowed_address_pairs { 2025-06-02 19:16:47.279597 | orchestrator | 19:16:47.279 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-06-02 19:16:47.279621 | orchestrator | 19:16:47.279 STDOUT terraform:  } 2025-06-02 19:16:47.279653 | orchestrator | 19:16:47.279 STDOUT terraform:  + allowed_address_pairs { 2025-06-02 19:16:47.279689 | orchestrator | 19:16:47.279 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-06-02 19:16:47.279712 | orchestrator | 19:16:47.279 STDOUT terraform:  } 2025-06-02 19:16:47.279756 | orchestrator | 19:16:47.279 STDOUT terraform:  + allowed_address_pairs { 2025-06-02 19:16:47.279794 | orchestrator | 19:16:47.279 STDOUT 
terraform:  + ip_address = "192.168.16.9/20" 2025-06-02 19:16:47.279816 | orchestrator | 19:16:47.279 STDOUT terraform:  } 2025-06-02 19:16:47.279848 | orchestrator | 19:16:47.279 STDOUT terraform:  + binding (known after apply) 2025-06-02 19:16:47.279872 | orchestrator | 19:16:47.279 STDOUT terraform:  + fixed_ip { 2025-06-02 19:16:47.279905 | orchestrator | 19:16:47.279 STDOUT terraform:  + ip_address = "192.168.16.15" 2025-06-02 19:16:47.279943 | orchestrator | 19:16:47.279 STDOUT terraform:  + subnet_id = (known after apply) 2025-06-02 19:16:47.279965 | orchestrator | 19:16:47.279 STDOUT terraform:  } 2025-06-02 19:16:47.279987 | orchestrator | 19:16:47.279 STDOUT terraform:  } 2025-06-02 19:16:47.280044 | orchestrator | 19:16:47.279 STDOUT terraform:  # openstack_networking_router_interface_v2.router_interface will be created 2025-06-02 19:16:47.280100 | orchestrator | 19:16:47.280 STDOUT terraform:  + resource "openstack_networking_router_interface_v2" "router_interface" { 2025-06-02 19:16:47.280129 | orchestrator | 19:16:47.280 STDOUT terraform:  + force_destroy = false 2025-06-02 19:16:47.280168 | orchestrator | 19:16:47.280 STDOUT terraform:  + id = (known after apply) 2025-06-02 19:16:47.280204 | orchestrator | 19:16:47.280 STDOUT terraform:  + port_id = (known after apply) 2025-06-02 19:16:47.280241 | orchestrator | 19:16:47.280 STDOUT terraform:  + region = (known after apply) 2025-06-02 19:16:47.280277 | orchestrator | 19:16:47.280 STDOUT terraform:  + router_id = (known after apply) 2025-06-02 19:16:47.280336 | orchestrator | 19:16:47.280 STDOUT terraform:  + subnet_id = (known after apply) 2025-06-02 19:16:47.280367 | orchestrator | 19:16:47.280 STDOUT terraform:  } 2025-06-02 19:16:47.280449 | orchestrator | 19:16:47.280 STDOUT terraform:  # openstack_networking_router_v2.router will be created 2025-06-02 19:16:47.280497 | orchestrator | 19:16:47.280 STDOUT terraform:  + resource "openstack_networking_router_v2" "router" { 2025-06-02 19:16:47.280593 
| orchestrator | 19:16:47.280 STDOUT terraform:  + admin_state_up = (known after apply) 2025-06-02 19:16:47.280667 | orchestrator | 19:16:47.280 STDOUT terraform:  + all_tags = (known after apply) 2025-06-02 19:16:47.280700 | orchestrator | 19:16:47.280 STDOUT terraform:  + availability_zone_hints = [ 2025-06-02 19:16:47.280724 | orchestrator | 19:16:47.280 STDOUT terraform:  + "nova", 2025-06-02 19:16:47.280759 | orchestrator | 19:16:47.280 STDOUT terraform:  ] 2025-06-02 19:16:47.280810 | orchestrator | 19:16:47.280 STDOUT terraform:  + distributed = (known after apply) 2025-06-02 19:16:47.280916 | orchestrator | 19:16:47.280 STDOUT terraform:  + enable_snat = (known after apply) 2025-06-02 19:16:47.281233 | orchestrator | 19:16:47.280 STDOUT terraform:  + external_network_id = "e6be7364-bfd8-4de7-8120-8f41c69a139a" 2025-06-02 19:16:47.281316 | orchestrator | 19:16:47.281 STDOUT terraform:  + id = (known after apply) 2025-06-02 19:16:47.281370 | orchestrator | 19:16:47.281 STDOUT terraform:  + name = "testbed" 2025-06-02 19:16:47.281415 | orchestrator | 19:16:47.281 STDOUT terraform:  + region = (known after apply) 2025-06-02 19:16:47.281460 | orchestrator | 19:16:47.281 STDOUT terraform:  + tenant_id = (known after apply) 2025-06-02 19:16:47.281514 | orchestrator | 19:16:47.281 STDOUT terraform:  + external_fixed_ip (known after apply) 2025-06-02 19:16:47.281535 | orchestrator | 19:16:47.281 STDOUT terraform:  } 2025-06-02 19:16:47.281597 | orchestrator | 19:16:47.281 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule1 will be created 2025-06-02 19:16:47.281659 | orchestrator | 19:16:47.281 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule1" { 2025-06-02 19:16:47.281686 | orchestrator | 19:16:47.281 STDOUT terraform:  + description = "ssh" 2025-06-02 19:16:47.281719 | orchestrator | 19:16:47.281 STDOUT terraform:  + direction = "ingress" 2025-06-02 19:16:47.281761 | 
orchestrator | 19:16:47.281 STDOUT terraform:  + ethertype = "IPv4" 2025-06-02 19:16:47.281829 | orchestrator | 19:16:47.281 STDOUT terraform:  + id = (known after apply) 2025-06-02 19:16:47.281859 | orchestrator | 19:16:47.281 STDOUT terraform:  + port_range_max = 22 2025-06-02 19:16:47.281888 | orchestrator | 19:16:47.281 STDOUT terraform:  + port_range_min = 22 2025-06-02 19:16:47.281917 | orchestrator | 19:16:47.281 STDOUT terraform:  + protocol = "tcp" 2025-06-02 19:16:47.281955 | orchestrator | 19:16:47.281 STDOUT terraform:  + region = (known after apply) 2025-06-02 19:16:47.281991 | orchestrator | 19:16:47.281 STDOUT terraform:  + remote_group_id = (known after apply) 2025-06-02 19:16:47.282043 | orchestrator | 19:16:47.281 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-06-02 19:16:47.282093 | orchestrator | 19:16:47.282 STDOUT terraform:  + security_group_id = (known after apply) 2025-06-02 19:16:47.282132 | orchestrator | 19:16:47.282 STDOUT terraform:  + tenant_id = (known after apply) 2025-06-02 19:16:47.282153 | orchestrator | 19:16:47.282 STDOUT terraform:  } 2025-06-02 19:16:47.282215 | orchestrator | 19:16:47.282 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule2 will be created 2025-06-02 19:16:47.282280 | orchestrator | 19:16:47.282 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule2" { 2025-06-02 19:16:47.282315 | orchestrator | 19:16:47.282 STDOUT terraform:  + description = "wireguard" 2025-06-02 19:16:47.282347 | orchestrator | 19:16:47.282 STDOUT terraform:  + direction = "ingress" 2025-06-02 19:16:47.282377 | orchestrator | 19:16:47.282 STDOUT terraform:  + ethertype = "IPv4" 2025-06-02 19:16:47.282419 | orchestrator | 19:16:47.282 STDOUT terraform:  + id = (known after apply) 2025-06-02 19:16:47.282449 | orchestrator | 19:16:47.282 STDOUT terraform:  + port_range_max = 51820 2025-06-02 19:16:47.282477 | orchestrator | 19:16:47.282 STDOUT 
terraform:  + port_range_min = 51820 2025-06-02 19:16:47.282505 | orchestrator | 19:16:47.282 STDOUT terraform:  + protocol = "udp" 2025-06-02 19:16:47.282545 | orchestrator | 19:16:47.282 STDOUT terraform:  + region = (known after apply) 2025-06-02 19:16:47.282582 | orchestrator | 19:16:47.282 STDOUT terraform:  + remote_group_id = (known after apply) 2025-06-02 19:16:47.282615 | orchestrator | 19:16:47.282 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-06-02 19:16:47.282653 | orchestrator | 19:16:47.282 STDOUT terraform:  + security_group_id = (known after apply) 2025-06-02 19:16:47.282691 | orchestrator | 19:16:47.282 STDOUT terraform:  + tenant_id = (known after apply) 2025-06-02 19:16:47.282718 | orchestrator | 19:16:47.282 STDOUT terraform:  } 2025-06-02 19:16:47.282809 | orchestrator | 19:16:47.282 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule3 will be created 2025-06-02 19:16:47.282895 | orchestrator | 19:16:47.282 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule3" { 2025-06-02 19:16:47.282949 | orchestrator | 19:16:47.282 STDOUT terraform:  + direction = "ingress" 2025-06-02 19:16:47.282980 | orchestrator | 19:16:47.282 STDOUT terraform:  + ethertype = "IPv4" 2025-06-02 19:16:47.283041 | orchestrator | 19:16:47.282 STDOUT terraform:  + id = (known after apply) 2025-06-02 19:16:47.283103 | orchestrator | 19:16:47.283 STDOUT terraform:  + protocol = "tcp" 2025-06-02 19:16:47.283150 | orchestrator | 19:16:47.283 STDOUT terraform:  + region = (known after apply) 2025-06-02 19:16:47.283204 | orchestrator | 19:16:47.283 STDOUT terraform:  + remote_group_id = (known after apply) 2025-06-02 19:16:47.283242 | orchestrator | 19:16:47.283 STDOUT terraform:  + remote_ip_prefix = "192.168.16.0/20" 2025-06-02 19:16:47.283280 | orchestrator | 19:16:47.283 STDOUT terraform:  + security_group_id = (known after apply) 2025-06-02 19:16:47.283318 | orchestrator | 
19:16:47.283 STDOUT terraform:  + tenant_id = (known after apply) 2025-06-02 19:16:47.283347 | orchestrator | 19:16:47.283 STDOUT terraform:  } 2025-06-02 19:16:47.283408 | orchestrator | 19:16:47.283 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule4 will be created 2025-06-02 19:16:47.283470 | orchestrator | 19:16:47.283 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule4" { 2025-06-02 19:16:47.283502 | orchestrator | 19:16:47.283 STDOUT terraform:  + direction = "ingress" 2025-06-02 19:16:47.283532 | orchestrator | 19:16:47.283 STDOUT terraform:  + ethertype = "IPv4" 2025-06-02 19:16:47.283571 | orchestrator | 19:16:47.283 STDOUT terraform:  + id = (known after apply) 2025-06-02 19:16:47.283600 | orchestrator | 19:16:47.283 STDOUT terraform:  + protocol = "udp" 2025-06-02 19:16:47.283639 | orchestrator | 19:16:47.283 STDOUT terraform:  + region = (known after apply) 2025-06-02 19:16:47.283679 | orchestrator | 19:16:47.283 STDOUT terraform:  + remote_group_id = (known after apply) 2025-06-02 19:16:47.283716 | orchestrator | 19:16:47.283 STDOUT terraform:  + remote_ip_prefix = "192.168.16.0/20" 2025-06-02 19:16:47.283769 | orchestrator | 19:16:47.283 STDOUT terraform:  + security_group_id = (known after apply) 2025-06-02 19:16:47.283808 | orchestrator | 19:16:47.283 STDOUT terraform:  + tenant_id = (known after apply) 2025-06-02 19:16:47.283830 | orchestrator | 19:16:47.283 STDOUT terraform:  } 2025-06-02 19:16:47.283890 | orchestrator | 19:16:47.283 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule5 will be created 2025-06-02 19:16:47.283952 | orchestrator | 19:16:47.283 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule5" { 2025-06-02 19:16:47.283983 | orchestrator | 19:16:47.283 STDOUT terraform:  + direction = "ingress" 2025-06-02 19:16:47.284013 | orchestrator | 19:16:47.283 
STDOUT terraform:  + ethertype = "IPv4" 2025-06-02 19:16:47.284051 | orchestrator | 19:16:47.284 STDOUT terraform:  + id = (known after apply) 2025-06-02 19:16:47.284079 | orchestrator | 19:16:47.284 STDOUT terraform:  + protocol = "icmp" 2025-06-02 19:16:47.284118 | orchestrator | 19:16:47.284 STDOUT terraform:  + region = (known after apply) 2025-06-02 19:16:47.284155 | orchestrator | 19:16:47.284 STDOUT terraform:  + remote_group_id = (known after apply) 2025-06-02 19:16:47.284187 | orchestrator | 19:16:47.284 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-06-02 19:16:47.284224 | orchestrator | 19:16:47.284 STDOUT terraform:  + security_group_id = (known after apply) 2025-06-02 19:16:47.284264 | orchestrator | 19:16:47.284 STDOUT terraform:  + tenant_id = (known after apply) 2025-06-02 19:16:47.284285 | orchestrator | 19:16:47.284 STDOUT terraform:  } 2025-06-02 19:16:47.284342 | orchestrator | 19:16:47.284 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_node_rule1 will be created 2025-06-02 19:16:47.284401 | orchestrator | 19:16:47.284 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule1" { 2025-06-02 19:16:47.284434 | orchestrator | 19:16:47.284 STDOUT terraform:  + direction = "ingress" 2025-06-02 19:16:47.284462 | orchestrator | 19:16:47.284 STDOUT terraform:  + ethertype = "IPv4" 2025-06-02 19:16:47.284506 | orchestrator | 19:16:47.284 STDOUT terraform:  + id = (known after apply) 2025-06-02 19:16:47.284538 | orchestrator | 19:16:47.284 STDOUT terraform:  + protocol = "tcp" 2025-06-02 19:16:47.284578 | orchestrator | 19:16:47.284 STDOUT terraform:  + region = (known after apply) 2025-06-02 19:16:47.284632 | orchestrator | 19:16:47.284 STDOUT terraform:  + remote_group_id = (known after apply) 2025-06-02 19:16:47.284669 | orchestrator | 19:16:47.284 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-06-02 19:16:47.284736 | orchestrator | 19:16:47.284 STDOUT terraform:  + 
security_group_id = (known after apply)
2025-06-02 19:16:47.284790 | orchestrator | 19:16:47.284 STDOUT terraform:  + tenant_id = (known after apply)
2025-06-02 19:16:47.284813 | orchestrator | 19:16:47.284 STDOUT terraform:  }
2025-06-02 19:16:47.284897 | orchestrator | 19:16:47.284 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_node_rule2 will be created
2025-06-02 19:16:47.284984 | orchestrator | 19:16:47.284 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule2" {
2025-06-02 19:16:47.285046 | orchestrator | 19:16:47.284 STDOUT terraform:  + direction = "ingress"
2025-06-02 19:16:47.285075 | orchestrator | 19:16:47.285 STDOUT terraform:  + ethertype = "IPv4"
2025-06-02 19:16:47.285113 | orchestrator | 19:16:47.285 STDOUT terraform:  + id = (known after apply)
2025-06-02 19:16:47.285142 | orchestrator | 19:16:47.285 STDOUT terraform:  + protocol = "udp"
2025-06-02 19:16:47.285182 | orchestrator | 19:16:47.285 STDOUT terraform:  + region = (known after apply)
2025-06-02 19:16:47.285218 | orchestrator | 19:16:47.285 STDOUT terraform:  + remote_group_id = (known after apply)
2025-06-02 19:16:47.285250 | orchestrator | 19:16:47.285 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0"
2025-06-02 19:16:47.285312 | orchestrator | 19:16:47.285 STDOUT terraform:  + security_group_id = (known after apply)
2025-06-02 19:16:47.285366 | orchestrator | 19:16:47.285 STDOUT terraform:  + tenant_id = (known after apply)
2025-06-02 19:16:47.285404 | orchestrator | 19:16:47.285 STDOUT terraform:  }
2025-06-02 19:16:47.285500 | orchestrator | 19:16:47.285 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_node_rule3 will be created
2025-06-02 19:16:47.285578 | orchestrator | 19:16:47.285 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule3" {
2025-06-02 19:16:47.285610 | orchestrator | 19:16:47.285 STDOUT terraform:  + direction = "ingress"
2025-06-02 19:16:47.285638 | orchestrator | 19:16:47.285 STDOUT terraform:  + ethertype = "IPv4"
2025-06-02 19:16:47.285675 | orchestrator | 19:16:47.285 STDOUT terraform:  + id = (known after apply)
2025-06-02 19:16:47.285704 | orchestrator | 19:16:47.285 STDOUT terraform:  + protocol = "icmp"
2025-06-02 19:16:47.285769 | orchestrator | 19:16:47.285 STDOUT terraform:  + region = (known after apply)
2025-06-02 19:16:47.285833 | orchestrator | 19:16:47.285 STDOUT terraform:  + remote_group_id = (known after apply)
2025-06-02 19:16:47.285872 | orchestrator | 19:16:47.285 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0"
2025-06-02 19:16:47.285929 | orchestrator | 19:16:47.285 STDOUT terraform:  + security_group_id = (known after apply)
2025-06-02 19:16:47.285992 | orchestrator | 19:16:47.285 STDOUT terraform:  + tenant_id = (known after apply)
2025-06-02 19:16:47.286045 | orchestrator | 19:16:47.286 STDOUT terraform:  }
2025-06-02 19:16:47.286126 | orchestrator | 19:16:47.286 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_rule_vrrp will be created
2025-06-02 19:16:47.286186 | orchestrator | 19:16:47.286 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_rule_vrrp" {
2025-06-02 19:16:47.286217 | orchestrator | 19:16:47.286 STDOUT terraform:  + description = "vrrp"
2025-06-02 19:16:47.286251 | orchestrator | 19:16:47.286 STDOUT terraform:  + direction = "ingress"
2025-06-02 19:16:47.286280 | orchestrator | 19:16:47.286 STDOUT terraform:  + ethertype = "IPv4"
2025-06-02 19:16:47.286321 | orchestrator | 19:16:47.286 STDOUT terraform:  + id = (known after apply)
2025-06-02 19:16:47.286353 | orchestrator | 19:16:47.286 STDOUT terraform:  + protocol = "112"
2025-06-02 19:16:47.286392 | orchestrator | 19:16:47.286 STDOUT terraform:  + region = (known after apply)
2025-06-02 19:16:47.286430 | orchestrator | 19:16:47.286 STDOUT terraform:  + remote_group_id = (known after apply)
2025-06-02 19:16:47.286464 | orchestrator | 19:16:47.286 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0"
2025-06-02 19:16:47.286502 | orchestrator | 19:16:47.286 STDOUT terraform:  + security_group_id = (known after apply)
2025-06-02 19:16:47.286541 | orchestrator | 19:16:47.286 STDOUT terraform:  + tenant_id = (known after apply)
2025-06-02 19:16:47.286563 | orchestrator | 19:16:47.286 STDOUT terraform:  }
2025-06-02 19:16:47.286646 | orchestrator | 19:16:47.286 STDOUT terraform:  # openstack_networking_secgroup_v2.security_group_management will be created
2025-06-02 19:16:47.286757 | orchestrator | 19:16:47.286 STDOUT terraform:  + resource "openstack_networking_secgroup_v2" "security_group_management" {
2025-06-02 19:16:47.286825 | orchestrator | 19:16:47.286 STDOUT terraform:  + all_tags = (known after apply)
2025-06-02 19:16:47.286891 | orchestrator | 19:16:47.286 STDOUT terraform:  + description = "management security group"
2025-06-02 19:16:47.286930 | orchestrator | 19:16:47.286 STDOUT terraform:  + id = (known after apply)
2025-06-02 19:16:47.286986 | orchestrator | 19:16:47.286 STDOUT terraform:  + name = "testbed-management"
2025-06-02 19:16:47.287024 | orchestrator | 19:16:47.286 STDOUT terraform:  + region = (known after apply)
2025-06-02 19:16:47.287060 | orchestrator | 19:16:47.287 STDOUT terraform:  + stateful = (known after apply)
2025-06-02 19:16:47.287097 | orchestrator | 19:16:47.287 STDOUT terraform:  + tenant_id = (known after apply)
2025-06-02 19:16:47.287120 | orchestrator | 19:16:47.287 STDOUT terraform:  }
2025-06-02 19:16:47.287176 | orchestrator | 19:16:47.287 STDOUT terraform:  # openstack_networking_secgroup_v2.security_group_node will be created
2025-06-02 19:16:47.287230 | orchestrator | 19:16:47.287 STDOUT terraform:  + resource "openstack_networking_secgroup_v2" "security_group_node" {
2025-06-02 19:16:47.287268 | orchestrator | 19:16:47.287 STDOUT terraform:  + all_tags = (known after apply)
2025-06-02 19:16:47.287305 | orchestrator | 19:16:47.287 STDOUT terraform:  + description = "node security group"
2025-06-02 19:16:47.287347 | orchestrator | 19:16:47.287 STDOUT terraform:  + id = (known after apply)
2025-06-02 19:16:47.287379 | orchestrator | 19:16:47.287 STDOUT terraform:  + name = "testbed-node"
2025-06-02 19:16:47.287415 | orchestrator | 19:16:47.287 STDOUT terraform:  + region = (known after apply)
2025-06-02 19:16:47.287453 | orchestrator | 19:16:47.287 STDOUT terraform:  + stateful = (known after apply)
2025-06-02 19:16:47.287493 | orchestrator | 19:16:47.287 STDOUT terraform:  + tenant_id = (known after apply)
2025-06-02 19:16:47.287540 | orchestrator | 19:16:47.287 STDOUT terraform:  }
2025-06-02 19:16:47.287607 | orchestrator | 19:16:47.287 STDOUT terraform:  # openstack_networking_subnet_v2.subnet_management will be created
2025-06-02 19:16:47.287683 | orchestrator | 19:16:47.287 STDOUT terraform:  + resource "openstack_networking_subnet_v2" "subnet_management" {
2025-06-02 19:16:47.287737 | orchestrator | 19:16:47.287 STDOUT terraform:  + all_tags = (known after apply)
2025-06-02 19:16:47.287813 | orchestrator | 19:16:47.287 STDOUT terraform:  + cidr = "192.168.16.0/20"
2025-06-02 19:16:47.287844 | orchestrator | 19:16:47.287 STDOUT terraform:  + dns_nameservers = [
2025-06-02 19:16:47.287893 | orchestrator | 19:16:47.287 STDOUT terraform:  + "8.8.8.8",
2025-06-02 19:16:47.287920 | orchestrator | 19:16:47.287 STDOUT terraform:  + "9.9.9.9",
2025-06-02 19:16:47.287969 | orchestrator | 19:16:47.287 STDOUT terraform:  ]
2025-06-02 19:16:47.288005 | orchestrator | 19:16:47.287 STDOUT terraform:  + enable_dhcp = true
2025-06-02 19:16:47.288043 | orchestrator | 19:16:47.288 STDOUT terraform:  + gateway_ip = (known after apply)
2025-06-02 19:16:47.288080 | orchestrator | 19:16:47.288 STDOUT terraform:  + id = (known after apply)
2025-06-02 19:16:47.288108 | orchestrator | 19:16:47.288 STDOUT terraform:  + ip_version = 4
2025-06-02 19:16:47.288145 | orchestrator | 19:16:47.288 STDOUT terraform:  + ipv6_address_mode = (known after apply)
2025-06-02 19:16:47.288183 | orchestrator | 19:16:47.288 STDOUT terraform:  + ipv6_ra_mode = (known after apply)
2025-06-02 19:16:47.288227 | orchestrator | 19:16:47.288 STDOUT terraform:  + name = "subnet-testbed-management"
2025-06-02 19:16:47.288291 | orchestrator | 19:16:47.288 STDOUT terraform:  + network_id = (known after apply)
2025-06-02 19:16:47.288335 | orchestrator | 19:16:47.288 STDOUT terraform:  + no_gateway = false
2025-06-02 19:16:47.288393 | orchestrator | 19:16:47.288 STDOUT terraform:  + region = (known after apply)
2025-06-02 19:16:47.288463 | orchestrator | 19:16:47.288 STDOUT terraform:  + service_types = (known after apply)
2025-06-02 19:16:47.288511 | orchestrator | 19:16:47.288 STDOUT terraform:  + tenant_id = (known after apply)
2025-06-02 19:16:47.288568 | orchestrator | 19:16:47.288 STDOUT terraform:  + allocation_pool {
2025-06-02 19:16:47.288616 | orchestrator | 19:16:47.288 STDOUT terraform:  + end = "192.168.31.250"
2025-06-02 19:16:47.288650 | orchestrator | 19:16:47.288 STDOUT terraform:  + start = "192.168.31.200"
2025-06-02 19:16:47.288673 | orchestrator | 19:16:47.288 STDOUT terraform:  }
2025-06-02 19:16:47.288695 | orchestrator | 19:16:47.288 STDOUT terraform:  }
2025-06-02 19:16:47.288748 | orchestrator | 19:16:47.288 STDOUT terraform:  # terraform_data.image will be created
2025-06-02 19:16:47.288814 | orchestrator | 19:16:47.288 STDOUT terraform:  + resource "terraform_data" "image" {
2025-06-02 19:16:47.288850 | orchestrator | 19:16:47.288 STDOUT terraform:  + id = (known after apply)
2025-06-02 19:16:47.288882 | orchestrator | 19:16:47.288 STDOUT terraform:  + input = "Ubuntu 24.04"
2025-06-02 19:16:47.288914 | orchestrator | 19:16:47.288 STDOUT terraform:  + output = (known after apply)
2025-06-02 19:16:47.288937 | orchestrator | 19:16:47.288 STDOUT terraform:  }
2025-06-02 19:16:47.288974 | orchestrator | 19:16:47.288 STDOUT terraform:  # terraform_data.image_node will be created
2025-06-02 19:16:47.289011 | orchestrator | 19:16:47.288 STDOUT terraform:  + resource "terraform_data" "image_node" {
2025-06-02 19:16:47.289043 | orchestrator | 19:16:47.289 STDOUT terraform:  + id = (known after apply)
2025-06-02 19:16:47.289074 | orchestrator | 19:16:47.289 STDOUT terraform:  + input = "Ubuntu 24.04"
2025-06-02 19:16:47.289107 | orchestrator | 19:16:47.289 STDOUT terraform:  + output = (known after apply)
2025-06-02 19:16:47.289128 | orchestrator | 19:16:47.289 STDOUT terraform:  }
2025-06-02 19:16:47.289188 | orchestrator | 19:16:47.289 STDOUT terraform: Plan: 64 to add, 0 to change, 0 to destroy.
2025-06-02 19:16:47.289212 | orchestrator | 19:16:47.289 STDOUT terraform: Changes to Outputs:
2025-06-02 19:16:47.289259 | orchestrator | 19:16:47.289 STDOUT terraform:  + manager_address = (sensitive value)
2025-06-02 19:16:47.289314 | orchestrator | 19:16:47.289 STDOUT terraform:  + private_key = (sensitive value)
2025-06-02 19:16:47.363734 | orchestrator | 19:16:47.363 STDOUT terraform: terraform_data.image: Creating...
2025-06-02 19:16:47.364012 | orchestrator | 19:16:47.363 STDOUT terraform: terraform_data.image: Creation complete after 0s [id=5e2d608f-ff6f-7be9-c8d4-24af93f2845f]
2025-06-02 19:16:47.428966 | orchestrator | 19:16:47.428 STDOUT terraform: terraform_data.image_node: Creating...
2025-06-02 19:16:47.429048 | orchestrator | 19:16:47.428 STDOUT terraform: terraform_data.image_node: Creation complete after 0s [id=4cefe577-4ce7-7610-1b10-2755350d66f8]
2025-06-02 19:16:47.457026 | orchestrator | 19:16:47.456 STDOUT terraform: data.openstack_images_image_v2.image_node: Reading...
2025-06-02 19:16:47.467869 | orchestrator | 19:16:47.467 STDOUT terraform: openstack_networking_network_v2.net_management: Creating...
2025-06-02 19:16:47.468085 | orchestrator | 19:16:47.468 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[1]: Creating...
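One planned rule above deserves a note: `security_group_rule_vrrp` opens IP protocol "112", the protocol number for VRRP, which keepalived uses between the testbed nodes. A minimal HCL sketch of how such a rule is typically declared; the attribute values mirror the plan output above, while wiring the rule to the node security group via a resource reference is an assumption:

```hcl
# Sketch only: values taken from the plan output; the reference to
# openstack_networking_secgroup_v2.security_group_node is an assumption.
resource "openstack_networking_secgroup_rule_v2" "security_group_rule_vrrp" {
  description       = "vrrp"
  direction         = "ingress"
  ethertype         = "IPv4"
  protocol          = "112" # IP protocol 112 = VRRP
  remote_ip_prefix  = "0.0.0.0/0"
  security_group_id = openstack_networking_secgroup_v2.security_group_node.id
}
```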
2025-06-02 19:16:47.479447 | orchestrator | 19:16:47.479 STDOUT terraform: openstack_compute_keypair_v2.key: Creating...
2025-06-02 19:16:47.479655 | orchestrator | 19:16:47.479 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[8]: Creating...
2025-06-02 19:16:47.481959 | orchestrator | 19:16:47.481 STDOUT terraform: data.openstack_images_image_v2.image: Reading...
2025-06-02 19:16:47.483721 | orchestrator | 19:16:47.483 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[5]: Creating...
2025-06-02 19:16:47.496422 | orchestrator | 19:16:47.496 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[6]: Creating...
2025-06-02 19:16:47.496903 | orchestrator | 19:16:47.496 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[4]: Creating...
2025-06-02 19:16:47.497934 | orchestrator | 19:16:47.497 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[2]: Creating...
2025-06-02 19:16:47.938443 | orchestrator | 19:16:47.938 STDOUT terraform: data.openstack_images_image_v2.image: Read complete after 1s [id=cd9ae1ce-c4eb-4380-9087-2aa040df6990]
2025-06-02 19:16:47.944905 | orchestrator | 19:16:47.944 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[0]: Creating...
2025-06-02 19:16:47.980199 | orchestrator | 19:16:47.979 STDOUT terraform: openstack_compute_keypair_v2.key: Creation complete after 1s [id=testbed]
2025-06-02 19:16:47.988047 | orchestrator | 19:16:47.987 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[3]: Creating...
2025-06-02 19:16:48.283148 | orchestrator | 19:16:48.282 STDOUT terraform: data.openstack_images_image_v2.image_node: Read complete after 1s [id=cd9ae1ce-c4eb-4380-9087-2aa040df6990]
2025-06-02 19:16:48.289708 | orchestrator | 19:16:48.289 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[7]: Creating...
2025-06-02 19:16:53.567346 | orchestrator | 19:16:53.566 STDOUT terraform: openstack_networking_network_v2.net_management: Creation complete after 7s [id=53fe5577-50d1-4636-bef1-e8f5ed98a160]
2025-06-02 19:16:56.325965 | orchestrator | 19:16:53.577 STDOUT terraform: openstack_blockstorage_volume_v3.manager_base_volume[0]: Creating...
2025-06-02 19:16:57.470734 | orchestrator | 19:16:57.470 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[1]: Still creating... [10s elapsed]
2025-06-02 19:16:57.481711 | orchestrator | 19:16:57.481 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[8]: Still creating... [10s elapsed]
2025-06-02 19:16:57.484899 | orchestrator | 19:16:57.484 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[5]: Still creating... [10s elapsed]
2025-06-02 19:16:57.498168 | orchestrator | 19:16:57.497 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[6]: Still creating... [10s elapsed]
2025-06-02 19:16:57.499320 | orchestrator | 19:16:57.499 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[2]: Still creating... [10s elapsed]
2025-06-02 19:16:57.500412 | orchestrator | 19:16:57.500 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[4]: Still creating... [11s elapsed]
2025-06-02 19:16:57.945497 | orchestrator | 19:16:57.945 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[0]: Still creating... [10s elapsed]
2025-06-02 19:16:57.989684 | orchestrator | 19:16:57.989 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[3]: Still creating... [10s elapsed]
2025-06-02 19:16:58.085991 | orchestrator | 19:16:58.084 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[6]: Creation complete after 11s [id=31522631-626d-4eab-bbf4-d80ec429ee40]
2025-06-02 19:16:58.101734 | orchestrator | 19:16:58.100 STDOUT terraform: local_sensitive_file.id_rsa: Creating...
2025-06-02 19:16:58.108312 | orchestrator | 19:16:58.106 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[8]: Creation complete after 11s [id=fe4e5841-a5c6-4e3c-a2f3-c1ddedcfef3b]
2025-06-02 19:16:58.109159 | orchestrator | 19:16:58.107 STDOUT terraform: local_sensitive_file.id_rsa: Creation complete after 0s [id=ed9edca446221a5aceb4b41cc6e6892461f56cc0]
2025-06-02 19:16:58.119371 | orchestrator | 19:16:58.119 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[2]: Creation complete after 11s [id=17194968-3402-4871-a3b7-d8b4dd3032d8]
2025-06-02 19:16:58.122076 | orchestrator | 19:16:58.121 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[4]: Creating...
2025-06-02 19:16:58.123769 | orchestrator | 19:16:58.123 STDOUT terraform: local_file.id_rsa_pub: Creating...
2025-06-02 19:16:58.127605 | orchestrator | 19:16:58.127 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[3]: Creating...
2025-06-02 19:16:58.129037 | orchestrator | 19:16:58.128 STDOUT terraform: local_file.id_rsa_pub: Creation complete after 0s [id=84b76fbeaa1475143a2a8298a5160cfdffd9cb34]
2025-06-02 19:16:58.135214 | orchestrator | 19:16:58.135 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[2]: Creating...
2025-06-02 19:16:58.135546 | orchestrator | 19:16:58.135 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[5]: Creation complete after 11s [id=3a83bf91-153f-49f3-b384-9ce8856c05fb]
2025-06-02 19:16:58.140901 | orchestrator | 19:16:58.140 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[1]: Creation complete after 11s [id=b537626e-57d0-4db8-bc93-475b5479d5db]
2025-06-02 19:16:58.141477 | orchestrator | 19:16:58.141 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[5]: Creating...
2025-06-02 19:16:58.146338 | orchestrator | 19:16:58.146 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[0]: Creating...
2025-06-02 19:16:58.163598 | orchestrator | 19:16:58.163 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[4]: Creation complete after 11s [id=56067267-e29e-4b33-bc58-6a568e4c77ee]
2025-06-02 19:16:58.173431 | orchestrator | 19:16:58.173 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[1]: Creating...
2025-06-02 19:16:58.208778 | orchestrator | 19:16:58.208 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[0]: Creation complete after 10s [id=f90c13d8-18de-4224-a0ec-2fb9bc967aba]
2025-06-02 19:16:58.218184 | orchestrator | 19:16:58.218 STDOUT terraform: openstack_networking_subnet_v2.subnet_management: Creating...
2025-06-02 19:16:58.226304 | orchestrator | 19:16:58.225 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[3]: Creation complete after 10s [id=2edf9efd-121b-4ff6-b6f5-d420782ba04f]
2025-06-02 19:16:58.290907 | orchestrator | 19:16:58.290 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[7]: Still creating... [10s elapsed]
2025-06-02 19:16:58.506091 | orchestrator | 19:16:58.505 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[7]: Creation complete after 11s [id=afb213e9-57a6-474d-a5f5-62ab693fc54b]
2025-06-02 19:17:03.578647 | orchestrator | 19:17:03.578 STDOUT terraform: openstack_blockstorage_volume_v3.manager_base_volume[0]: Still creating... [10s elapsed]
2025-06-02 19:17:04.044146 | orchestrator | 19:17:04.043 STDOUT terraform: openstack_blockstorage_volume_v3.manager_base_volume[0]: Creation complete after 10s [id=a3dd6798-adb9-4543-9c08-89ba5682cac3]
2025-06-02 19:17:04.175599 | orchestrator | 19:17:04.175 STDOUT terraform: openstack_networking_subnet_v2.subnet_management: Creation complete after 6s [id=5ac7eda7-24a4-4127-9719-5d4058fec210]
2025-06-02 19:17:04.183845 | orchestrator | 19:17:04.183 STDOUT terraform: openstack_networking_router_v2.router: Creating...
2025-06-02 19:17:08.124512 | orchestrator | 19:17:08.124 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[4]: Still creating... [10s elapsed]
2025-06-02 19:17:08.128814 | orchestrator | 19:17:08.128 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[3]: Still creating... [10s elapsed]
2025-06-02 19:17:08.135967 | orchestrator | 19:17:08.135 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[2]: Still creating... [10s elapsed]
2025-06-02 19:17:08.143332 | orchestrator | 19:17:08.143 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[5]: Still creating... [10s elapsed]
2025-06-02 19:17:08.147608 | orchestrator | 19:17:08.147 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[0]: Still creating... [10s elapsed]
2025-06-02 19:17:08.173911 | orchestrator | 19:17:08.173 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[1]: Still creating... [10s elapsed]
2025-06-02 19:17:08.505813 | orchestrator | 19:17:08.505 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[5]: Creation complete after 11s [id=08b58ced-3c5b-405c-ae09-2d18558cfc25]
2025-06-02 19:17:08.528102 | orchestrator | 19:17:08.527 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[3]: Creation complete after 11s [id=3b3451ec-fae9-4227-a22d-4a5dda6aaaab]
2025-06-02 19:17:08.541279 | orchestrator | 19:17:08.540 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[4]: Creation complete after 11s [id=453458da-4d99-4de0-a2fa-ec8f657b9d69]
2025-06-02 19:17:08.563197 | orchestrator | 19:17:08.562 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[1]: Creation complete after 11s [id=51e325de-b67d-49ea-ab97-c3f76b8e45c7]
2025-06-02 19:17:08.618598 | orchestrator | 19:17:08.618 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[0]: Creation complete after 11s [id=5f45be76-170b-43b0-9721-f75aad287b64]
2025-06-02 19:17:10.060028 | orchestrator | 19:17:10.059 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[2]: Creation complete after 12s [id=aba5dfb2-d59a-4774-ab63-5c2c16f9e35e]
2025-06-02 19:17:11.976836 | orchestrator | 19:17:11.976 STDOUT terraform: openstack_networking_router_v2.router: Creation complete after 8s [id=79e415bd-487e-49a4-a9e0-0ef35c760a58]
2025-06-02 19:17:11.985861 | orchestrator | 19:17:11.985 STDOUT terraform: openstack_networking_router_interface_v2.router_interface: Creating...
2025-06-02 19:17:11.986347 | orchestrator | 19:17:11.986 STDOUT terraform: openstack_networking_secgroup_v2.security_group_management: Creating...
2025-06-02 19:17:11.986759 | orchestrator | 19:17:11.986 STDOUT terraform: openstack_networking_secgroup_v2.security_group_node: Creating...
2025-06-02 19:17:12.291026 | orchestrator | 19:17:12.290 STDOUT terraform: openstack_networking_secgroup_v2.security_group_management: Creation complete after 0s [id=6965e1eb-b06a-41b0-8b6c-0f76568a76b9]
2025-06-02 19:17:12.306073 | orchestrator | 19:17:12.305 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creating...
2025-06-02 19:17:12.306172 | orchestrator | 19:17:12.305 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creating...
2025-06-02 19:17:12.306569 | orchestrator | 19:17:12.306 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creating...
2025-06-02 19:17:12.309816 | orchestrator | 19:17:12.309 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creating...
2025-06-02 19:17:12.310280 | orchestrator | 19:17:12.310 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creating...
2025-06-02 19:17:12.315204 | orchestrator | 19:17:12.315 STDOUT terraform: openstack_networking_port_v2.manager_port_management: Creating...
2025-06-02 19:17:12.367674 | orchestrator | 19:17:12.367 STDOUT terraform: openstack_networking_secgroup_v2.security_group_node: Creation complete after 0s [id=3944d8d1-b05b-4179-b1d4-a1e2e92fa411]
2025-06-02 19:17:12.373464 | orchestrator | 19:17:12.373 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creating...
2025-06-02 19:17:12.376567 | orchestrator | 19:17:12.376 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creating...
2025-06-02 19:17:12.377240 | orchestrator | 19:17:12.377 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creating...
2025-06-02 19:17:12.544202 | orchestrator | 19:17:12.543 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creation complete after 1s [id=bddc2377-91d9-4557-ae06-b7790cb7e369]
2025-06-02 19:17:12.549507 | orchestrator | 19:17:12.549 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creation complete after 1s [id=753f9483-fc40-4ac7-bda8-14f2f48149f3]
2025-06-02 19:17:12.552366 | orchestrator | 19:17:12.552 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creating...
2025-06-02 19:17:12.568519 | orchestrator | 19:17:12.568 STDOUT terraform: openstack_networking_port_v2.node_port_management[0]: Creating...
2025-06-02 19:17:12.689535 | orchestrator | 19:17:12.689 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creation complete after 1s [id=0e2664b4-4e2e-46fe-95f1-0ce2f85b0e5a]
2025-06-02 19:17:12.704048 | orchestrator | 19:17:12.703 STDOUT terraform: openstack_networking_port_v2.node_port_management[5]: Creating...
2025-06-02 19:17:12.893964 | orchestrator | 19:17:12.893 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creation complete after 1s [id=b361865f-3373-4ce3-b215-9d82085e0f43]
2025-06-02 19:17:12.914198 | orchestrator | 19:17:12.913 STDOUT terraform: openstack_networking_port_v2.node_port_management[1]: Creating...
2025-06-02 19:17:13.084169 | orchestrator | 19:17:13.083 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creation complete after 0s [id=d56ee2e4-b9a8-45ef-aba7-6beba4326d29]
2025-06-02 19:17:13.098008 | orchestrator | 19:17:13.097 STDOUT terraform: openstack_networking_port_v2.node_port_management[2]: Creating...
2025-06-02 19:17:13.162624 | orchestrator | 19:17:13.162 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creation complete after 1s [id=c77771f7-42b7-4b1a-a790-3037988208bf]
2025-06-02 19:17:13.176591 | orchestrator | 19:17:13.176 STDOUT terraform: openstack_networking_port_v2.node_port_management[3]: Creating...
2025-06-02 19:17:13.339880 | orchestrator | 19:17:13.337 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creation complete after 1s [id=04c04007-3694-4d30-a437-97ad78fa4258]
2025-06-02 19:17:13.354716 | orchestrator | 19:17:13.354 STDOUT terraform: openstack_networking_port_v2.node_port_management[4]: Creating...
2025-06-02 19:17:13.532337 | orchestrator | 19:17:13.531 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creation complete after 2s [id=0076af2a-a6f7-4469-80a4-d4faaae258f9]
2025-06-02 19:17:13.697201 | orchestrator | 19:17:13.696 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creation complete after 2s [id=3ca298ca-dd81-4f70-b260-c134bc7d3d87]
2025-06-02 19:17:18.008161 | orchestrator | 19:17:18.007 STDOUT terraform: openstack_networking_port_v2.manager_port_management: Creation complete after 6s [id=7cfa3ea2-7d79-4249-9dc3-f37a722cdc4f]
2025-06-02 19:17:18.589653 | orchestrator | 19:17:18.589 STDOUT terraform: openstack_networking_port_v2.node_port_management[0]: Creation complete after 6s [id=1f6683ce-6e17-4617-b126-7c709f9c502d]
2025-06-02 19:17:18.654269 | orchestrator | 19:17:18.653 STDOUT terraform: openstack_networking_port_v2.node_port_management[5]: Creation complete after 6s [id=c496f668-5cd2-4e9a-8aa1-6ef0cb301e96]
2025-06-02 19:17:18.669784 | orchestrator | 19:17:18.669 STDOUT terraform: openstack_networking_port_v2.node_port_management[1]: Creation complete after 6s [id=ade686cc-2dba-40ac-89eb-5fce8810767f]
2025-06-02 19:17:18.851363 | orchestrator | 19:17:18.851 STDOUT terraform: openstack_networking_port_v2.node_port_management[2]: Creation complete after 6s [id=092c28d3-fd98-47ba-8eeb-028ea29babf3]
2025-06-02 19:17:18.957854 | orchestrator | 19:17:18.957 STDOUT terraform: openstack_networking_port_v2.node_port_management[3]: Creation complete after 6s [id=c6b71504-167a-410c-9260-7d67b8f5a1f9]
2025-06-02 19:17:19.110135 | orchestrator | 19:17:19.109 STDOUT terraform: openstack_networking_port_v2.node_port_management[4]: Creation complete after 6s [id=8e35e266-5c49-45e0-a2be-249e7b388e28]
2025-06-02 19:17:20.199098 | orchestrator | 19:17:20.198 STDOUT terraform: openstack_networking_router_interface_v2.router_interface: Creation complete after 8s [id=3b7a92f7-9450-418e-9f25-e95fe1b810b7]
2025-06-02 19:17:20.208573 | orchestrator | 19:17:20.208 STDOUT terraform: openstack_networking_floatingip_v2.manager_floating_ip: Creating...
2025-06-02 19:17:20.237033 | orchestrator | 19:17:20.236 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Creating...
2025-06-02 19:17:20.238423 | orchestrator | 19:17:20.238 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Creating...
2025-06-02 19:17:20.241513 | orchestrator | 19:17:20.241 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Creating...
2025-06-02 19:17:20.242959 | orchestrator | 19:17:20.242 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Creating...
2025-06-02 19:17:20.250438 | orchestrator | 19:17:20.250 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Creating...
2025-06-02 19:17:20.255127 | orchestrator | 19:17:20.254 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Creating...
2025-06-02 19:17:26.624869 | orchestrator | 19:17:26.623 STDOUT terraform: openstack_networking_floatingip_v2.manager_floating_ip: Creation complete after 7s [id=6a202ac1-1f89-42d9-9c39-d39fc53fe019]
2025-06-02 19:17:26.633217 | orchestrator | 19:17:26.633 STDOUT terraform: openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creating...
2025-06-02 19:17:26.635880 | orchestrator | 19:17:26.633 STDOUT terraform: local_file.inventory: Creating...
2025-06-02 19:17:26.635980 | orchestrator | 19:17:26.635 STDOUT terraform: local_file.MANAGER_ADDRESS: Creating...
2025-06-02 19:17:26.637918 | orchestrator | 19:17:26.637 STDOUT terraform: local_file.inventory: Creation complete after 0s [id=8dc2944875ee2e2e189d0c5ffdb835802cfaa03f]
2025-06-02 19:17:26.639108 | orchestrator | 19:17:26.638 STDOUT terraform: local_file.MANAGER_ADDRESS: Creation complete after 0s [id=2cdc2438bd487d6c6c2aa0029af3cd1d6551dfe5]
2025-06-02 19:17:27.731401 | orchestrator | 19:17:27.730 STDOUT terraform: openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creation complete after 1s [id=6a202ac1-1f89-42d9-9c39-d39fc53fe019]
2025-06-02 19:17:30.239510 | orchestrator | 19:17:30.239 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Still creating... [10s elapsed]
2025-06-02 19:17:30.242542 | orchestrator | 19:17:30.242 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Still creating... [10s elapsed]
2025-06-02 19:17:30.245987 | orchestrator | 19:17:30.245 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... [10s elapsed]
2025-06-02 19:17:30.246250 | orchestrator | 19:17:30.246 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Still creating... [10s elapsed]
2025-06-02 19:17:30.253789 | orchestrator | 19:17:30.253 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Still creating... [10s elapsed]
2025-06-02 19:17:30.257267 | orchestrator | 19:17:30.257 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Still creating... [10s elapsed]
2025-06-02 19:17:40.239729 | orchestrator | 19:17:40.239 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Still creating... [20s elapsed]
2025-06-02 19:17:40.243780 | orchestrator | 19:17:40.243 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Still creating... [20s elapsed]
2025-06-02 19:17:40.247012 | orchestrator | 19:17:40.246 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... [20s elapsed]
2025-06-02 19:17:40.247125 | orchestrator | 19:17:40.246 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Still creating... [20s elapsed]
2025-06-02 19:17:40.254553 | orchestrator | 19:17:40.254 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Still creating... [20s elapsed]
2025-06-02 19:17:40.258201 | orchestrator | 19:17:40.257 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Still creating... [20s elapsed]
2025-06-02 19:17:40.882932 | orchestrator | 19:17:40.882 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Creation complete after 21s [id=32db8062-4001-4452-bf97-1c2d2b809fe7]
2025-06-02 19:17:41.081226 | orchestrator | 19:17:41.080 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Creation complete after 21s [id=bc1e4fc1-02fd-4123-a686-fcfe9516e988]
2025-06-02 19:17:50.242844 | orchestrator | 19:17:50.242 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Still creating... [30s elapsed]
2025-06-02 19:17:50.248341 | orchestrator | 19:17:50.248 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Still creating... [30s elapsed]
2025-06-02 19:17:50.248447 | orchestrator | 19:17:50.248 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... [30s elapsed]
2025-06-02 19:17:50.258803 | orchestrator | 19:17:50.258 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Still creating... [30s elapsed]
2025-06-02 19:17:50.850358 | orchestrator | 19:17:50.849 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Creation complete after 31s [id=8b9ceda8-e32d-4a1a-ac47-9938dc3029e7]
2025-06-02 19:17:50.977432 | orchestrator | 19:17:50.977 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Creation complete after 31s [id=3722f794-698d-44f5-8094-1c28ed4331d1]
2025-06-02 19:17:51.026001 | orchestrator | 19:17:51.025 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Creation complete after 31s [id=7d1feb8a-c47a-422b-a560-46dd24884f0c]
2025-06-02 19:17:51.191345 | orchestrator | 19:17:51.190 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Creation complete after 31s [id=e6b77449-8d91-4583-a2d7-a69c90140f16]
2025-06-02 19:17:51.206366 | orchestrator | 19:17:51.206 STDOUT terraform: null_resource.node_semaphore: Creating...
2025-06-02 19:17:51.211243 | orchestrator | 19:17:51.211 STDOUT terraform: null_resource.node_semaphore: Creation complete after 0s [id=3651360692060088589]
2025-06-02 19:17:51.222929 | orchestrator | 19:17:51.222 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creating...
2025-06-02 19:17:51.225040 | orchestrator | 19:17:51.224 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creating...
2025-06-02 19:17:51.226664 | orchestrator | 19:17:51.226 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creating...
2025-06-02 19:17:51.243612 | orchestrator | 19:17:51.243 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creating...
2025-06-02 19:17:51.243675 | orchestrator | 19:17:51.243 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creating...
2025-06-02 19:17:51.243694 | orchestrator | 19:17:51.243 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creating...
2025-06-02 19:17:51.248390 | orchestrator | 19:17:51.248 STDOUT terraform: openstack_compute_instance_v2.manager_server: Creating... 2025-06-02 19:17:51.250184 | orchestrator | 19:17:51.250 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creating... 2025-06-02 19:17:51.258267 | orchestrator | 19:17:51.258 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creating... 2025-06-02 19:17:51.261685 | orchestrator | 19:17:51.261 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creating... 2025-06-02 19:17:56.684984 | orchestrator | 19:17:56.684 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creation complete after 6s [id=bc1e4fc1-02fd-4123-a686-fcfe9516e988/3a83bf91-153f-49f3-b384-9ce8856c05fb] 2025-06-02 19:17:56.751232 | orchestrator | 19:17:56.750 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creation complete after 6s [id=3722f794-698d-44f5-8094-1c28ed4331d1/31522631-626d-4eab-bbf4-d80ec429ee40] 2025-06-02 19:17:56.774231 | orchestrator | 19:17:56.773 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creation complete after 6s [id=32db8062-4001-4452-bf97-1c2d2b809fe7/b537626e-57d0-4db8-bc93-475b5479d5db] 2025-06-02 19:17:56.807694 | orchestrator | 19:17:56.807 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creation complete after 6s [id=3722f794-698d-44f5-8094-1c28ed4331d1/2edf9efd-121b-4ff6-b6f5-d420782ba04f] 2025-06-02 19:17:56.810664 | orchestrator | 19:17:56.810 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creation complete after 6s [id=bc1e4fc1-02fd-4123-a686-fcfe9516e988/17194968-3402-4871-a3b7-d8b4dd3032d8] 2025-06-02 19:17:56.837086 | orchestrator | 19:17:56.836 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creation complete after 6s 
[id=32db8062-4001-4452-bf97-1c2d2b809fe7/afb213e9-57a6-474d-a5f5-62ab693fc54b] 2025-06-02 19:17:56.855321 | orchestrator | 19:17:56.854 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creation complete after 6s [id=3722f794-698d-44f5-8094-1c28ed4331d1/f90c13d8-18de-4224-a0ec-2fb9bc967aba] 2025-06-02 19:17:56.863256 | orchestrator | 19:17:56.862 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creation complete after 6s [id=bc1e4fc1-02fd-4123-a686-fcfe9516e988/fe4e5841-a5c6-4e3c-a2f3-c1ddedcfef3b] 2025-06-02 19:17:56.884878 | orchestrator | 19:17:56.884 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creation complete after 6s [id=32db8062-4001-4452-bf97-1c2d2b809fe7/56067267-e29e-4b33-bc58-6a568e4c77ee] 2025-06-02 19:18:01.251013 | orchestrator | 19:18:01.250 STDOUT terraform: openstack_compute_instance_v2.manager_server: Still creating... [10s elapsed] 2025-06-02 19:18:11.252244 | orchestrator | 19:18:11.251 STDOUT terraform: openstack_compute_instance_v2.manager_server: Still creating... [20s elapsed] 2025-06-02 19:18:11.740118 | orchestrator | 19:18:11.739 STDOUT terraform: openstack_compute_instance_v2.manager_server: Creation complete after 21s [id=5706d7b4-59db-4060-96b6-1ed77aa76998] 2025-06-02 19:18:11.766956 | orchestrator | 19:18:11.766 STDOUT terraform: Apply complete! Resources: 64 added, 0 changed, 0 destroyed. 
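With `Apply complete!`, the plan's 64 resources exist, and the next tasks (`Fetch manager address`, `Get ssh keypair from terraform environment`) consume Terraform outputs whose values are blanked in the log. A hedged sketch of reading them with the standard `terraform output -raw` CLI; the helper names are ours, and the job's actual retrieval commands are not shown here:

```shell
# Sketch only: reads the two outputs listed below via `terraform output -raw`.
# fetch_manager_address / fetch_private_key are illustrative helpers,
# not functions from the job.
fetch_manager_address() {
  terraform output -raw manager_address
}

fetch_private_key() {  # usage: fetch_private_key DEST_FILE
  terraform output -raw private_key > "$1"
  chmod 600 "$1"       # ssh refuses keys readable by group/others
}
```

In the job, these values feed the later "Wait up to 300 seconds for port 22" and SSH tasks.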
2025-06-02 19:18:11.767132 | orchestrator | 19:18:11.766 STDOUT terraform: Outputs: 2025-06-02 19:18:11.767170 | orchestrator | 19:18:11.766 STDOUT terraform: manager_address = 2025-06-02 19:18:11.767183 | orchestrator | 19:18:11.767 STDOUT terraform: private_key = 2025-06-02 19:18:11.879389 | orchestrator | ok: Runtime: 0:01:35.666787 2025-06-02 19:18:11.913455 | 2025-06-02 19:18:11.913635 | TASK [Fetch manager address] 2025-06-02 19:18:12.378730 | orchestrator | ok 2025-06-02 19:18:12.389108 | 2025-06-02 19:18:12.389265 | TASK [Set manager_host address] 2025-06-02 19:18:12.468192 | orchestrator | ok 2025-06-02 19:18:12.479198 | 2025-06-02 19:18:12.479373 | LOOP [Update ansible collections] 2025-06-02 19:18:13.864410 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2025-06-02 19:18:13.864766 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2 2025-06-02 19:18:13.864822 | orchestrator | Starting galaxy collection install process 2025-06-02 19:18:13.864862 | orchestrator | Process install dependency map 2025-06-02 19:18:13.864898 | orchestrator | Starting collection install process 2025-06-02 19:18:13.864931 | orchestrator | Installing 'osism.commons:999.0.0' to '/home/zuul-testbed05/.ansible/collections/ansible_collections/osism/commons' 2025-06-02 19:18:13.864968 | orchestrator | Created collection for osism.commons:999.0.0 at /home/zuul-testbed05/.ansible/collections/ansible_collections/osism/commons 2025-06-02 19:18:13.865007 | orchestrator | osism.commons:999.0.0 was installed successfully 2025-06-02 19:18:13.865136 | orchestrator | ok: Item: commons Runtime: 0:00:01.035652 2025-06-02 19:18:14.840438 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2025-06-02 19:18:14.840619 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2 2025-06-02 19:18:14.840700 | orchestrator | Starting galaxy 
collection install process 2025-06-02 19:18:14.840765 | orchestrator | Process install dependency map 2025-06-02 19:18:14.840821 | orchestrator | Starting collection install process 2025-06-02 19:18:14.840873 | orchestrator | Installing 'osism.services:999.0.0' to '/home/zuul-testbed05/.ansible/collections/ansible_collections/osism/services' 2025-06-02 19:18:14.840909 | orchestrator | Created collection for osism.services:999.0.0 at /home/zuul-testbed05/.ansible/collections/ansible_collections/osism/services 2025-06-02 19:18:14.840942 | orchestrator | osism.services:999.0.0 was installed successfully 2025-06-02 19:18:14.840993 | orchestrator | ok: Item: services Runtime: 0:00:00.621716 2025-06-02 19:18:14.862013 | 2025-06-02 19:18:14.862245 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"] 2025-06-02 19:18:25.437630 | orchestrator | ok 2025-06-02 19:18:25.448955 | 2025-06-02 19:18:25.449121 | TASK [Wait a little longer for the manager so that everything is ready] 2025-06-02 19:19:25.490581 | orchestrator | ok 2025-06-02 19:19:25.501357 | 2025-06-02 19:19:25.501500 | TASK [Fetch manager ssh hostkey] 2025-06-02 19:19:27.075181 | orchestrator | Output suppressed because no_log was given 2025-06-02 19:19:27.092295 | 2025-06-02 19:19:27.092531 | TASK [Get ssh keypair from terraform environment] 2025-06-02 19:19:27.633764 | orchestrator | ok: Runtime: 0:00:00.012777 2025-06-02 19:19:27.649910 | 2025-06-02 19:19:27.650130 | TASK [Point out that the following task takes some time and does not give any output] 2025-06-02 19:19:27.701192 | orchestrator | ok: The task 'Run manager part 0' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete. 
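The task `Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"` above checks not just that the port accepts connections but that the SSH identification banner names OpenSSH (likely Ansible's `wait_for` module with a `search_regex`). A minimal shell sketch of the same check; the helper names are ours, and it assumes GNU `timeout` plus bash for `/dev/tcp`:

```shell
# Illustrative only -- not the job's actual implementation.
banner_is_openssh() {
  # sshd sends an identification line like "SSH-2.0-OpenSSH_9.6p1" on connect.
  case "$1" in *OpenSSH*) return 0 ;; *) return 1 ;; esac
}

fetch_ssh_banner() {  # usage: fetch_ssh_banner HOST PORT
  timeout 5 bash -c "exec 3<>/dev/tcp/$1/$2 && head -n1 <&3" 2>/dev/null
}

wait_for_openssh() {  # usage: wait_for_openssh HOST [TIMEOUT_SECONDS]
  end=$(( $(date +%s) + ${2:-300} ))
  while [ "$(date +%s)" -lt "$end" ]; do
    banner_is_openssh "$(fetch_ssh_banner "$1" 22)" && return 0
    sleep 5
  done
  return 1
}
```

Matching on the banner rather than mere connectivity avoids racing a host whose TCP stack is up before sshd is.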
2025-06-02 19:19:27.711640 | 2025-06-02 19:19:27.711778 | TASK [Run manager part 0] 2025-06-02 19:19:29.420290 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2025-06-02 19:19:29.466054 | orchestrator | 2025-06-02 19:19:29.466110 | orchestrator | PLAY [Wait for cloud-init to finish] ******************************************* 2025-06-02 19:19:29.466118 | orchestrator | 2025-06-02 19:19:29.466131 | orchestrator | TASK [Check /var/lib/cloud/instance/boot-finished] ***************************** 2025-06-02 19:19:31.168064 | orchestrator | ok: [testbed-manager] 2025-06-02 19:19:31.168137 | orchestrator | 2025-06-02 19:19:31.168174 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2025-06-02 19:19:31.168191 | orchestrator | 2025-06-02 19:19:31.168203 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-06-02 19:19:33.166329 | orchestrator | ok: [testbed-manager] 2025-06-02 19:19:33.166490 | orchestrator | 2025-06-02 19:19:33.166513 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2025-06-02 19:19:33.858993 | orchestrator | ok: [testbed-manager] 2025-06-02 19:19:33.859079 | orchestrator | 2025-06-02 19:19:33.859096 | orchestrator | TASK [Set repo_path fact] ****************************************************** 2025-06-02 19:19:33.930145 | orchestrator | skipping: [testbed-manager] 2025-06-02 19:19:33.930199 | orchestrator | 2025-06-02 19:19:33.930209 | orchestrator | TASK [Update package cache] **************************************************** 2025-06-02 19:19:33.967932 | orchestrator | skipping: [testbed-manager] 2025-06-02 19:19:33.968003 | orchestrator | 2025-06-02 19:19:33.968015 | orchestrator | TASK [Install required packages] *********************************************** 2025-06-02 19:19:33.994073 | orchestrator | skipping: [testbed-manager] 2025-06-02 19:19:33.994142 | 
orchestrator | 2025-06-02 19:19:33.994153 | orchestrator | TASK [Remove some python packages] ********************************************* 2025-06-02 19:19:34.026678 | orchestrator | skipping: [testbed-manager] 2025-06-02 19:19:34.026742 | orchestrator | 2025-06-02 19:19:34.026749 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2025-06-02 19:19:34.059008 | orchestrator | skipping: [testbed-manager] 2025-06-02 19:19:34.059080 | orchestrator | 2025-06-02 19:19:34.059092 | orchestrator | TASK [Fail if Ubuntu version is lower than 22.04] ****************************** 2025-06-02 19:19:34.094870 | orchestrator | skipping: [testbed-manager] 2025-06-02 19:19:34.094963 | orchestrator | 2025-06-02 19:19:34.094982 | orchestrator | TASK [Fail if Debian version is lower than 12] ********************************* 2025-06-02 19:19:34.125740 | orchestrator | skipping: [testbed-manager] 2025-06-02 19:19:34.125811 | orchestrator | 2025-06-02 19:19:34.125819 | orchestrator | TASK [Set APT options on manager] ********************************************** 2025-06-02 19:19:34.955942 | orchestrator | changed: [testbed-manager] 2025-06-02 19:19:35.080053 | orchestrator | 2025-06-02 19:19:35.080099 | orchestrator | TASK [Update APT cache and run dist-upgrade] *********************************** 2025-06-02 19:22:50.148194 | orchestrator | changed: [testbed-manager] 2025-06-02 19:22:50.148288 | orchestrator | 2025-06-02 19:22:50.148307 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************ 2025-06-02 19:24:12.999960 | orchestrator | changed: [testbed-manager] 2025-06-02 19:24:13.000069 | orchestrator | 2025-06-02 19:24:13.000090 | orchestrator | TASK [Install required packages] *********************************************** 2025-06-02 19:24:43.615981 | orchestrator | changed: [testbed-manager] 2025-06-02 19:24:43.616055 | orchestrator | 2025-06-02 19:24:43.616074 | orchestrator | TASK [Remove 
some python packages] ********************************************* 2025-06-02 19:24:52.193312 | orchestrator | changed: [testbed-manager] 2025-06-02 19:24:52.193359 | orchestrator | 2025-06-02 19:24:52.193369 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2025-06-02 19:24:52.242259 | orchestrator | ok: [testbed-manager] 2025-06-02 19:24:52.242300 | orchestrator | 2025-06-02 19:24:52.242310 | orchestrator | TASK [Get current user] ******************************************************** 2025-06-02 19:24:53.050647 | orchestrator | ok: [testbed-manager] 2025-06-02 19:24:53.050738 | orchestrator | 2025-06-02 19:24:53.050757 | orchestrator | TASK [Create venv directory] *************************************************** 2025-06-02 19:24:53.782766 | orchestrator | changed: [testbed-manager] 2025-06-02 19:24:53.783603 | orchestrator | 2025-06-02 19:24:53.783627 | orchestrator | TASK [Install netaddr in venv] ************************************************* 2025-06-02 19:25:00.288331 | orchestrator | changed: [testbed-manager] 2025-06-02 19:25:00.288419 | orchestrator | 2025-06-02 19:25:00.288456 | orchestrator | TASK [Install ansible-core in venv] ******************************************** 2025-06-02 19:25:06.503509 | orchestrator | changed: [testbed-manager] 2025-06-02 19:25:06.503579 | orchestrator | 2025-06-02 19:25:06.503598 | orchestrator | TASK [Install requests >= 2.32.2] ********************************************** 2025-06-02 19:25:09.348776 | orchestrator | changed: [testbed-manager] 2025-06-02 19:25:09.348898 | orchestrator | 2025-06-02 19:25:09.348916 | orchestrator | TASK [Install docker >= 7.1.0] ************************************************* 2025-06-02 19:25:11.164673 | orchestrator | changed: [testbed-manager] 2025-06-02 19:25:11.164750 | orchestrator | 2025-06-02 19:25:11.164764 | orchestrator | TASK [Create directories in /opt/src] ****************************************** 2025-06-02 
19:25:12.281183 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2025-06-02 19:25:12.281275 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2025-06-02 19:25:12.281291 | orchestrator | 2025-06-02 19:25:12.281305 | orchestrator | TASK [Sync sources in /opt/src] ************************************************ 2025-06-02 19:25:12.324637 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2025-06-02 19:25:12.324715 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2025-06-02 19:25:12.324729 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2025-06-02 19:25:12.324742 | orchestrator | deprecation_warnings=False in ansible.cfg. 2025-06-02 19:25:17.010451 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2025-06-02 19:25:17.010556 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2025-06-02 19:25:17.010573 | orchestrator | 2025-06-02 19:25:17.010586 | orchestrator | TASK [Create /usr/share/ansible directory] ************************************* 2025-06-02 19:25:17.581451 | orchestrator | changed: [testbed-manager] 2025-06-02 19:25:17.581538 | orchestrator | 2025-06-02 19:25:17.581556 | orchestrator | TASK [Install collections from Ansible galaxy] ********************************* 2025-06-02 19:26:37.424563 | orchestrator | changed: [testbed-manager] => (item=ansible.netcommon) 2025-06-02 19:26:37.424636 | orchestrator | changed: [testbed-manager] => (item=ansible.posix) 2025-06-02 19:26:37.424643 | orchestrator | changed: [testbed-manager] => (item=community.docker>=3.10.2) 2025-06-02 19:26:37.424648 | orchestrator | 2025-06-02 19:26:37.424653 | orchestrator | TASK [Install local collections] *********************************************** 2025-06-02 19:26:39.770420 | orchestrator | changed: [testbed-manager] => 
(item=ansible-collection-commons) 2025-06-02 19:26:39.770508 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-services) 2025-06-02 19:26:39.770523 | orchestrator | 2025-06-02 19:26:39.770536 | orchestrator | PLAY [Create operator user] **************************************************** 2025-06-02 19:26:39.770548 | orchestrator | 2025-06-02 19:26:39.770560 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-06-02 19:26:41.168939 | orchestrator | ok: [testbed-manager] 2025-06-02 19:26:41.169017 | orchestrator | 2025-06-02 19:26:41.169034 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2025-06-02 19:26:41.202208 | orchestrator | ok: [testbed-manager] 2025-06-02 19:26:41.202278 | orchestrator | 2025-06-02 19:26:41.202294 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2025-06-02 19:26:41.261729 | orchestrator | ok: [testbed-manager] 2025-06-02 19:26:41.261863 | orchestrator | 2025-06-02 19:26:41.261891 | orchestrator | TASK [osism.commons.operator : Create operator group] ************************** 2025-06-02 19:26:42.061941 | orchestrator | changed: [testbed-manager] 2025-06-02 19:26:42.061978 | orchestrator | 2025-06-02 19:26:42.061985 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2025-06-02 19:26:42.792559 | orchestrator | changed: [testbed-manager] 2025-06-02 19:26:42.792712 | orchestrator | 2025-06-02 19:26:42.792733 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2025-06-02 19:26:44.159114 | orchestrator | changed: [testbed-manager] => (item=adm) 2025-06-02 19:26:44.159153 | orchestrator | changed: [testbed-manager] => (item=sudo) 2025-06-02 19:26:44.159161 | orchestrator | 2025-06-02 19:26:44.159175 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] 
************************* 2025-06-02 19:26:45.581963 | orchestrator | changed: [testbed-manager] 2025-06-02 19:26:45.582098 | orchestrator | 2025-06-02 19:26:45.582119 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] *** 2025-06-02 19:26:47.320465 | orchestrator | changed: [testbed-manager] => (item=export LANGUAGE=C.UTF-8) 2025-06-02 19:26:47.320544 | orchestrator | changed: [testbed-manager] => (item=export LANG=C.UTF-8) 2025-06-02 19:26:47.320556 | orchestrator | changed: [testbed-manager] => (item=export LC_ALL=C.UTF-8) 2025-06-02 19:26:47.320566 | orchestrator | 2025-06-02 19:26:47.320577 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] ************************** 2025-06-02 19:26:47.888861 | orchestrator | changed: [testbed-manager] 2025-06-02 19:26:47.888952 | orchestrator | 2025-06-02 19:26:47.888968 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************ 2025-06-02 19:26:47.963983 | orchestrator | skipping: [testbed-manager] 2025-06-02 19:26:47.964066 | orchestrator | 2025-06-02 19:26:47.964082 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************ 2025-06-02 19:26:48.828069 | orchestrator | changed: [testbed-manager] => (item=None) 2025-06-02 19:26:48.828157 | orchestrator | changed: [testbed-manager] 2025-06-02 19:26:48.828173 | orchestrator | 2025-06-02 19:26:48.828187 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] ********************* 2025-06-02 19:26:48.865003 | orchestrator | skipping: [testbed-manager] 2025-06-02 19:26:48.865075 | orchestrator | 2025-06-02 19:26:48.865089 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] ***************** 2025-06-02 19:26:48.899213 | orchestrator | skipping: [testbed-manager] 2025-06-02 19:26:48.899252 | orchestrator | 2025-06-02 19:26:48.899259 | orchestrator | TASK [osism.commons.operator : Delete 
authorized GitHub accounts] ************** 2025-06-02 19:26:48.927515 | orchestrator | skipping: [testbed-manager] 2025-06-02 19:26:48.927550 | orchestrator | 2025-06-02 19:26:48.927557 | orchestrator | TASK [osism.commons.operator : Set password] *********************************** 2025-06-02 19:26:48.980131 | orchestrator | skipping: [testbed-manager] 2025-06-02 19:26:48.980172 | orchestrator | 2025-06-02 19:26:48.980182 | orchestrator | TASK [osism.commons.operator : Unset & lock password] ************************** 2025-06-02 19:26:49.667104 | orchestrator | ok: [testbed-manager] 2025-06-02 19:26:49.667139 | orchestrator | 2025-06-02 19:26:49.667145 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2025-06-02 19:26:49.667150 | orchestrator | 2025-06-02 19:26:49.667156 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-06-02 19:26:51.114571 | orchestrator | ok: [testbed-manager] 2025-06-02 19:26:51.114609 | orchestrator | 2025-06-02 19:26:51.114615 | orchestrator | TASK [Recursively change ownership of /opt/venv] ******************************* 2025-06-02 19:26:52.073449 | orchestrator | changed: [testbed-manager] 2025-06-02 19:26:52.073538 | orchestrator | 2025-06-02 19:26:52.073555 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-02 19:26:52.073569 | orchestrator | testbed-manager : ok=33 changed=23 unreachable=0 failed=0 skipped=12 rescued=0 ignored=0 2025-06-02 19:26:52.073581 | orchestrator | 2025-06-02 19:26:52.525549 | orchestrator | ok: Runtime: 0:07:24.049866 2025-06-02 19:26:52.539739 | 2025-06-02 19:26:52.539936 | TASK [Point out that the log in on the manager is now possible] 2025-06-02 19:26:52.576368 | orchestrator | ok: It is now already possible to log in to the manager with 'make login'. 
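The suppressed details of the `osism.commons.operator : Copy user sudoers file` task above install a sudoers drop-in for the operator user. A hedged sketch of what such a drop-in can look like; the username `operator` and the NOPASSWD policy line are assumptions, not taken from the role, and the file is written to a temp dir so the sketch runs unprivileged:

```shell
# Assumption: the user name "operator" and the NOPASSWD rule are
# illustrative, not the osism.commons.operator role's actual content.
tmpdir=$(mktemp -d)
printf 'operator ALL=(ALL) NOPASSWD: ALL\n' > "$tmpdir/operator"
chmod 0440 "$tmpdir/operator"   # sudo refuses group/world-writable files
# A real deployment would place the file under /etc/sudoers.d/ and
# syntax-check it first, e.g.: visudo -cf "$tmpdir/operator"
```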
2025-06-02 19:26:52.588922 | 2025-06-02 19:26:52.589057 | TASK [Point out that the following task takes some time and does not give any output] 2025-06-02 19:26:52.625449 | orchestrator | ok: The task 'Run manager part 1 + 2' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete. 2025-06-02 19:26:52.635207 | 2025-06-02 19:26:52.635338 | TASK [Run manager part 1 + 2] 2025-06-02 19:26:53.460201 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2025-06-02 19:26:53.512067 | orchestrator | 2025-06-02 19:26:53.512115 | orchestrator | PLAY [Run manager part 1] ****************************************************** 2025-06-02 19:26:53.512123 | orchestrator | 2025-06-02 19:26:53.512136 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-06-02 19:26:56.419322 | orchestrator | ok: [testbed-manager] 2025-06-02 19:26:56.419367 | orchestrator | 2025-06-02 19:26:56.419390 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2025-06-02 19:26:56.455347 | orchestrator | skipping: [testbed-manager] 2025-06-02 19:26:56.455407 | orchestrator | 2025-06-02 19:26:56.455425 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2025-06-02 19:26:56.508300 | orchestrator | ok: [testbed-manager] 2025-06-02 19:26:56.508348 | orchestrator | 2025-06-02 19:26:56.508463 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2025-06-02 19:26:56.552166 | orchestrator | ok: [testbed-manager] 2025-06-02 19:26:56.552212 | orchestrator | 2025-06-02 19:26:56.552220 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2025-06-02 19:26:56.631671 | orchestrator | ok: [testbed-manager] 2025-06-02 19:26:56.631726 | orchestrator | 2025-06-02 19:26:56.631735 | 
orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2025-06-02 19:26:56.702203 | orchestrator | ok: [testbed-manager] 2025-06-02 19:26:56.702256 | orchestrator | 2025-06-02 19:26:56.702267 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2025-06-02 19:26:56.758561 | orchestrator | included: /home/zuul-testbed05/.ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager 2025-06-02 19:26:56.758604 | orchestrator | 2025-06-02 19:26:56.758610 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2025-06-02 19:26:57.489436 | orchestrator | ok: [testbed-manager] 2025-06-02 19:26:57.489488 | orchestrator | 2025-06-02 19:26:57.489499 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2025-06-02 19:26:57.544987 | orchestrator | skipping: [testbed-manager] 2025-06-02 19:26:57.545041 | orchestrator | 2025-06-02 19:26:57.545050 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2025-06-02 19:26:58.902549 | orchestrator | changed: [testbed-manager] 2025-06-02 19:26:58.902604 | orchestrator | 2025-06-02 19:26:58.902613 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2025-06-02 19:26:59.506059 | orchestrator | ok: [testbed-manager] 2025-06-02 19:26:59.506115 | orchestrator | 2025-06-02 19:26:59.506123 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2025-06-02 19:27:00.645607 | orchestrator | changed: [testbed-manager] 2025-06-02 19:27:00.645659 | orchestrator | 2025-06-02 19:27:00.645669 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2025-06-02 19:27:13.876350 | orchestrator | changed: [testbed-manager] 2025-06-02 19:27:13.876450 | orchestrator | 
2025-06-02 19:27:13.876466 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2025-06-02 19:27:14.547445 | orchestrator | ok: [testbed-manager] 2025-06-02 19:27:14.547487 | orchestrator | 2025-06-02 19:27:14.547498 | orchestrator | TASK [Set repo_path fact] ****************************************************** 2025-06-02 19:27:14.602261 | orchestrator | skipping: [testbed-manager] 2025-06-02 19:27:14.602305 | orchestrator | 2025-06-02 19:27:14.602314 | orchestrator | TASK [Copy SSH public key] ***************************************************** 2025-06-02 19:27:15.537740 | orchestrator | changed: [testbed-manager] 2025-06-02 19:27:15.537780 | orchestrator | 2025-06-02 19:27:15.537789 | orchestrator | TASK [Copy SSH private key] **************************************************** 2025-06-02 19:27:16.497598 | orchestrator | changed: [testbed-manager] 2025-06-02 19:27:16.497790 | orchestrator | 2025-06-02 19:27:16.497836 | orchestrator | TASK [Create configuration directory] ****************************************** 2025-06-02 19:27:17.158526 | orchestrator | changed: [testbed-manager] 2025-06-02 19:27:17.158565 | orchestrator | 2025-06-02 19:27:17.158573 | orchestrator | TASK [Copy testbed repo] ******************************************************* 2025-06-02 19:27:17.197760 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2025-06-02 19:27:17.197895 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2025-06-02 19:27:17.197911 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2025-06-02 19:27:17.197923 | orchestrator | deprecation_warnings=False in ansible.cfg. 
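The `Copy SSH public key` / `Copy SSH private key` tasks above place the Terraform-generated keypair on the manager. Whatever the role's actual destinations are (not shown in the log), private key material only works with strict permissions; a small unprivileged sketch using temp paths:

```shell
# Illustrative paths and placeholder content. ssh rejects private keys that
# are readable by group or others, so 600 on the key and 700 on the
# directory are the conventional modes.
sshdir=$(mktemp -d)
printf '%s\n' '(private key material goes here)' > "$sshdir/id_rsa"
chmod 700 "$sshdir"
chmod 600 "$sshdir/id_rsa"
```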
2025-06-02 19:27:19.169692 | orchestrator | changed: [testbed-manager] 2025-06-02 19:27:19.169742 | orchestrator | 2025-06-02 19:27:19.169751 | orchestrator | TASK [Install python requirements in venv] ************************************* 2025-06-02 19:27:28.292287 | orchestrator | ok: [testbed-manager] => (item=Jinja2) 2025-06-02 19:27:28.292428 | orchestrator | ok: [testbed-manager] => (item=PyYAML) 2025-06-02 19:27:28.292447 | orchestrator | ok: [testbed-manager] => (item=packaging) 2025-06-02 19:27:28.292459 | orchestrator | changed: [testbed-manager] => (item=python-gilt==1.2.3) 2025-06-02 19:27:28.292479 | orchestrator | ok: [testbed-manager] => (item=requests>=2.32.2) 2025-06-02 19:27:28.292490 | orchestrator | ok: [testbed-manager] => (item=docker>=7.1.0) 2025-06-02 19:27:28.292501 | orchestrator | 2025-06-02 19:27:28.292513 | orchestrator | TASK [Copy testbed custom CA certificate on Debian/Ubuntu] ********************* 2025-06-02 19:27:29.373852 | orchestrator | changed: [testbed-manager] 2025-06-02 19:27:29.373940 | orchestrator | 2025-06-02 19:27:29.373958 | orchestrator | TASK [Copy testbed custom CA certificate on CentOS] **************************** 2025-06-02 19:27:29.418041 | orchestrator | skipping: [testbed-manager] 2025-06-02 19:27:29.418126 | orchestrator | 2025-06-02 19:27:29.418142 | orchestrator | TASK [Run update-ca-certificates on Debian/Ubuntu] ***************************** 2025-06-02 19:27:32.496048 | orchestrator | changed: [testbed-manager] 2025-06-02 19:27:32.496139 | orchestrator | 2025-06-02 19:27:32.496156 | orchestrator | TASK [Run update-ca-trust on RedHat] ******************************************* 2025-06-02 19:27:32.534982 | orchestrator | skipping: [testbed-manager] 2025-06-02 19:27:32.535049 | orchestrator | 2025-06-02 19:27:32.535074 | orchestrator | TASK [Run manager part 2] ****************************************************** 2025-06-02 19:29:09.523474 | orchestrator | changed: [testbed-manager] 2025-06-02 
19:29:09.523827 | orchestrator | 2025-06-02 19:29:09.523863 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2025-06-02 19:29:10.659681 | orchestrator | ok: [testbed-manager] 2025-06-02 19:29:10.659720 | orchestrator | 2025-06-02 19:29:10.659729 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-02 19:29:10.659736 | orchestrator | testbed-manager : ok=21 changed=11 unreachable=0 failed=0 skipped=5 rescued=0 ignored=0 2025-06-02 19:29:10.659742 | orchestrator | 2025-06-02 19:29:10.804133 | orchestrator | ok: Runtime: 0:02:17.797754 2025-06-02 19:29:10.814290 | 2025-06-02 19:29:10.814412 | TASK [Reboot manager] 2025-06-02 19:29:12.350907 | orchestrator | ok: Runtime: 0:00:00.947739 2025-06-02 19:29:12.367012 | 2025-06-02 19:29:12.367165 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"] 2025-06-02 19:29:26.856489 | orchestrator | ok 2025-06-02 19:29:26.867120 | 2025-06-02 19:29:26.867265 | TASK [Wait a little longer for the manager so that everything is ready] 2025-06-02 19:30:26.917373 | orchestrator | ok 2025-06-02 19:30:26.927140 | 2025-06-02 19:30:26.927306 | TASK [Deploy manager + bootstrap nodes] 2025-06-02 19:30:29.493983 | orchestrator | 2025-06-02 19:30:29.494221 | orchestrator | # DEPLOY MANAGER 2025-06-02 19:30:29.494245 | orchestrator | 2025-06-02 19:30:29.494259 | orchestrator | + set -e 2025-06-02 19:30:29.494272 | orchestrator | + echo 2025-06-02 19:30:29.494286 | orchestrator | + echo '# DEPLOY MANAGER' 2025-06-02 19:30:29.494302 | orchestrator | + echo 2025-06-02 19:30:29.494351 | orchestrator | + cat /opt/manager-vars.sh 2025-06-02 19:30:29.497754 | orchestrator | export NUMBER_OF_NODES=6 2025-06-02 19:30:29.497783 | orchestrator | 2025-06-02 19:30:29.497814 | orchestrator | export CEPH_VERSION=reef 2025-06-02 19:30:29.497829 | orchestrator | export CONFIGURATION_VERSION=main 2025-06-02 19:30:29.497844 | orchestrator 
| export MANAGER_VERSION=9.1.0 2025-06-02 19:30:29.497868 | orchestrator | export OPENSTACK_VERSION=2024.2 2025-06-02 19:30:29.497880 | orchestrator | 2025-06-02 19:30:29.497901 | orchestrator | export ARA=false 2025-06-02 19:30:29.497914 | orchestrator | export DEPLOY_MODE=manager 2025-06-02 19:30:29.497933 | orchestrator | export TEMPEST=false 2025-06-02 19:30:29.497946 | orchestrator | export IS_ZUUL=true 2025-06-02 19:30:29.497960 | orchestrator | 2025-06-02 19:30:29.497979 | orchestrator | export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.191 2025-06-02 19:30:29.497993 | orchestrator | export EXTERNAL_API=false 2025-06-02 19:30:29.498006 | orchestrator | 2025-06-02 19:30:29.498054 | orchestrator | export IMAGE_USER=ubuntu 2025-06-02 19:30:29.498072 | orchestrator | export IMAGE_NODE_USER=ubuntu 2025-06-02 19:30:29.498085 | orchestrator | 2025-06-02 19:30:29.498098 | orchestrator | export CEPH_STACK=ceph-ansible 2025-06-02 19:30:29.498352 | orchestrator | 2025-06-02 19:30:29.498371 | orchestrator | + echo 2025-06-02 19:30:29.498384 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-06-02 19:30:29.499197 | orchestrator | ++ export INTERACTIVE=false 2025-06-02 19:30:29.499243 | orchestrator | ++ INTERACTIVE=false 2025-06-02 19:30:29.499285 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-06-02 19:30:29.499303 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-06-02 19:30:29.499316 | orchestrator | + source /opt/manager-vars.sh 2025-06-02 19:30:29.499329 | orchestrator | ++ export NUMBER_OF_NODES=6 2025-06-02 19:30:29.499341 | orchestrator | ++ NUMBER_OF_NODES=6 2025-06-02 19:30:29.499353 | orchestrator | ++ export CEPH_VERSION=reef 2025-06-02 19:30:29.499365 | orchestrator | ++ CEPH_VERSION=reef 2025-06-02 19:30:29.499378 | orchestrator | ++ export CONFIGURATION_VERSION=main 2025-06-02 19:30:29.499391 | orchestrator | ++ CONFIGURATION_VERSION=main 2025-06-02 19:30:29.499411 | orchestrator | ++ export MANAGER_VERSION=9.1.0 2025-06-02 19:30:29.499423 | 
orchestrator | ++ MANAGER_VERSION=9.1.0
2025-06-02 19:30:29.499434 | orchestrator | ++ export OPENSTACK_VERSION=2024.2
2025-06-02 19:30:29.499456 | orchestrator | ++ OPENSTACK_VERSION=2024.2
2025-06-02 19:30:29.499467 | orchestrator | ++ export ARA=false
2025-06-02 19:30:29.499478 | orchestrator | ++ ARA=false
2025-06-02 19:30:29.499489 | orchestrator | ++ export DEPLOY_MODE=manager
2025-06-02 19:30:29.499500 | orchestrator | ++ DEPLOY_MODE=manager
2025-06-02 19:30:29.499511 | orchestrator | ++ export TEMPEST=false
2025-06-02 19:30:29.499521 | orchestrator | ++ TEMPEST=false
2025-06-02 19:30:29.499532 | orchestrator | ++ export IS_ZUUL=true
2025-06-02 19:30:29.499542 | orchestrator | ++ IS_ZUUL=true
2025-06-02 19:30:29.499553 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.191
2025-06-02 19:30:29.499564 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.191
2025-06-02 19:30:29.499575 | orchestrator | ++ export EXTERNAL_API=false
2025-06-02 19:30:29.499586 | orchestrator | ++ EXTERNAL_API=false
2025-06-02 19:30:29.499596 | orchestrator | ++ export IMAGE_USER=ubuntu
2025-06-02 19:30:29.499606 | orchestrator | ++ IMAGE_USER=ubuntu
2025-06-02 19:30:29.499617 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2025-06-02 19:30:29.499632 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2025-06-02 19:30:29.499643 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2025-06-02 19:30:29.499654 | orchestrator | ++ CEPH_STACK=ceph-ansible
2025-06-02 19:30:29.499665 | orchestrator | + sudo ln -sf /opt/configuration/contrib/semver2.sh /usr/local/bin/semver
2025-06-02 19:30:29.558008 | orchestrator | + docker version
2025-06-02 19:30:29.832958 | orchestrator | Client: Docker Engine - Community
2025-06-02 19:30:29.833081 | orchestrator | Version: 27.5.1
2025-06-02 19:30:29.833108 | orchestrator | API version: 1.47
2025-06-02 19:30:29.833127 | orchestrator | Go version: go1.22.11
2025-06-02 19:30:29.833145 | orchestrator | Git commit: 9f9e405
2025-06-02 19:30:29.833162 | orchestrator | Built: Wed Jan 22 13:41:48 2025
2025-06-02 19:30:29.833179 | orchestrator | OS/Arch: linux/amd64
2025-06-02 19:30:29.833195 | orchestrator | Context: default
2025-06-02 19:30:29.833213 | orchestrator |
2025-06-02 19:30:29.833231 | orchestrator | Server: Docker Engine - Community
2025-06-02 19:30:29.833250 | orchestrator | Engine:
2025-06-02 19:30:29.833269 | orchestrator | Version: 27.5.1
2025-06-02 19:30:29.833287 | orchestrator | API version: 1.47 (minimum version 1.24)
2025-06-02 19:30:29.833342 | orchestrator | Go version: go1.22.11
2025-06-02 19:30:29.833362 | orchestrator | Git commit: 4c9b3b0
2025-06-02 19:30:29.833381 | orchestrator | Built: Wed Jan 22 13:41:48 2025
2025-06-02 19:30:29.833400 | orchestrator | OS/Arch: linux/amd64
2025-06-02 19:30:29.833418 | orchestrator | Experimental: false
2025-06-02 19:30:29.833436 | orchestrator | containerd:
2025-06-02 19:30:29.833657 | orchestrator | Version: 1.7.27
2025-06-02 19:30:29.833686 | orchestrator | GitCommit: 05044ec0a9a75232cad458027ca83437aae3f4da
2025-06-02 19:30:29.833707 | orchestrator | runc:
2025-06-02 19:30:29.833727 | orchestrator | Version: 1.2.5
2025-06-02 19:30:29.833747 | orchestrator | GitCommit: v1.2.5-0-g59923ef
2025-06-02 19:30:29.833768 | orchestrator | docker-init:
2025-06-02 19:30:29.833789 | orchestrator | Version: 0.19.0
2025-06-02 19:30:29.833846 | orchestrator | GitCommit: de40ad0
2025-06-02 19:30:29.835867 | orchestrator | + sh -c /opt/configuration/scripts/deploy/000-manager.sh
2025-06-02 19:30:29.844421 | orchestrator | + set -e
2025-06-02 19:30:29.844494 | orchestrator | + source /opt/manager-vars.sh
2025-06-02 19:30:29.844508 | orchestrator | ++ export NUMBER_OF_NODES=6
2025-06-02 19:30:29.844519 | orchestrator | ++ NUMBER_OF_NODES=6
2025-06-02 19:30:29.844530 | orchestrator | ++ export CEPH_VERSION=reef
2025-06-02 19:30:29.844540 | orchestrator | ++ CEPH_VERSION=reef
2025-06-02 19:30:29.844552 | orchestrator | ++ export CONFIGURATION_VERSION=main
2025-06-02 19:30:29.844564 | orchestrator | ++ CONFIGURATION_VERSION=main
2025-06-02 19:30:29.844575 | orchestrator | ++ export MANAGER_VERSION=9.1.0
2025-06-02 19:30:29.844585 | orchestrator | ++ MANAGER_VERSION=9.1.0
2025-06-02 19:30:29.844596 | orchestrator | ++ export OPENSTACK_VERSION=2024.2
2025-06-02 19:30:29.844607 | orchestrator | ++ OPENSTACK_VERSION=2024.2
2025-06-02 19:30:29.844618 | orchestrator | ++ export ARA=false
2025-06-02 19:30:29.844629 | orchestrator | ++ ARA=false
2025-06-02 19:30:29.844639 | orchestrator | ++ export DEPLOY_MODE=manager
2025-06-02 19:30:29.844650 | orchestrator | ++ DEPLOY_MODE=manager
2025-06-02 19:30:29.844660 | orchestrator | ++ export TEMPEST=false
2025-06-02 19:30:29.844671 | orchestrator | ++ TEMPEST=false
2025-06-02 19:30:29.844681 | orchestrator | ++ export IS_ZUUL=true
2025-06-02 19:30:29.844692 | orchestrator | ++ IS_ZUUL=true
2025-06-02 19:30:29.844703 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.191
2025-06-02 19:30:29.844713 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.191
2025-06-02 19:30:29.844724 | orchestrator | ++ export EXTERNAL_API=false
2025-06-02 19:30:29.844735 | orchestrator | ++ EXTERNAL_API=false
2025-06-02 19:30:29.844745 | orchestrator | ++ export IMAGE_USER=ubuntu
2025-06-02 19:30:29.844756 | orchestrator | ++ IMAGE_USER=ubuntu
2025-06-02 19:30:29.844767 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2025-06-02 19:30:29.844778 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2025-06-02 19:30:29.844788 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2025-06-02 19:30:29.844824 | orchestrator | ++ CEPH_STACK=ceph-ansible
2025-06-02 19:30:29.844837 | orchestrator | + source /opt/configuration/scripts/include.sh
2025-06-02 19:30:29.844848 | orchestrator | ++ export INTERACTIVE=false
2025-06-02 19:30:29.844858 | orchestrator | ++ INTERACTIVE=false
2025-06-02 19:30:29.844869 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2025-06-02 19:30:29.844884 | orchestrator | ++ OSISM_APPLY_RETRY=1
2025-06-02 19:30:29.844895 | orchestrator | + [[ 9.1.0 != \l\a\t\e\s\t ]]
2025-06-02 19:30:29.844906 | orchestrator | + /opt/configuration/scripts/set-manager-version.sh 9.1.0
2025-06-02 19:30:29.851070 | orchestrator | + set -e
2025-06-02 19:30:29.851101 | orchestrator | + VERSION=9.1.0
2025-06-02 19:30:29.851114 | orchestrator | + sed -i 's/manager_version: .*/manager_version: 9.1.0/g' /opt/configuration/environments/manager/configuration.yml
2025-06-02 19:30:29.859329 | orchestrator | + [[ 9.1.0 != \l\a\t\e\s\t ]]
2025-06-02 19:30:29.859368 | orchestrator | + sed -i /ceph_version:/d /opt/configuration/environments/manager/configuration.yml
2025-06-02 19:30:29.862061 | orchestrator | + sed -i /openstack_version:/d /opt/configuration/environments/manager/configuration.yml
2025-06-02 19:30:29.864084 | orchestrator | + sh -c /opt/configuration/scripts/sync-configuration-repository.sh
2025-06-02 19:30:29.872615 | orchestrator | /opt/configuration ~
2025-06-02 19:30:29.872641 | orchestrator | + set -e
2025-06-02 19:30:29.872653 | orchestrator | + pushd /opt/configuration
2025-06-02 19:30:29.872664 | orchestrator | + [[ -e /opt/venv/bin/activate ]]
2025-06-02 19:30:29.874467 | orchestrator | + source /opt/venv/bin/activate
2025-06-02 19:30:29.875513 | orchestrator | ++ deactivate nondestructive
2025-06-02 19:30:29.875531 | orchestrator | ++ '[' -n '' ']'
2025-06-02 19:30:29.875549 | orchestrator | ++ '[' -n '' ']'
2025-06-02 19:30:29.875583 | orchestrator | ++ hash -r
2025-06-02 19:30:29.875598 | orchestrator | ++ '[' -n '' ']'
2025-06-02 19:30:29.875609 | orchestrator | ++ unset VIRTUAL_ENV
2025-06-02 19:30:29.875623 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT
2025-06-02 19:30:29.875665 | orchestrator | ++ '[' '!' nondestructive = nondestructive ']'
2025-06-02 19:30:29.875922 | orchestrator | ++ '[' linux-gnu = cygwin ']'
2025-06-02 19:30:29.875938 | orchestrator | ++ '[' linux-gnu = msys ']'
2025-06-02 19:30:29.875949 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv
2025-06-02 19:30:29.875959 | orchestrator | ++ VIRTUAL_ENV=/opt/venv
2025-06-02 19:30:29.876090 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2025-06-02 19:30:29.876209 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2025-06-02 19:30:29.876223 | orchestrator | ++ export PATH
2025-06-02 19:30:29.876235 | orchestrator | ++ '[' -n '' ']'
2025-06-02 19:30:29.876246 | orchestrator | ++ '[' -z '' ']'
2025-06-02 19:30:29.876256 | orchestrator | ++ _OLD_VIRTUAL_PS1=
2025-06-02 19:30:29.876271 | orchestrator | ++ PS1='(venv) '
2025-06-02 19:30:29.876282 | orchestrator | ++ export PS1
2025-06-02 19:30:29.876293 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) '
2025-06-02 19:30:29.876303 | orchestrator | ++ export VIRTUAL_ENV_PROMPT
2025-06-02 19:30:29.876314 | orchestrator | ++ hash -r
2025-06-02 19:30:29.876328 | orchestrator | + pip3 install --no-cache-dir python-gilt==1.2.3 requests Jinja2 PyYAML packaging
2025-06-02 19:30:30.887314 | orchestrator | Requirement already satisfied: python-gilt==1.2.3 in /opt/venv/lib/python3.12/site-packages (1.2.3)
2025-06-02 19:30:30.889421 | orchestrator | Requirement already satisfied: requests in /opt/venv/lib/python3.12/site-packages (2.32.3)
2025-06-02 19:30:30.890052 | orchestrator | Requirement already satisfied: Jinja2 in /opt/venv/lib/python3.12/site-packages (3.1.6)
2025-06-02 19:30:30.891449 | orchestrator | Requirement already satisfied: PyYAML in /opt/venv/lib/python3.12/site-packages (6.0.2)
2025-06-02 19:30:30.892676 | orchestrator | Requirement already satisfied: packaging in /opt/venv/lib/python3.12/site-packages (25.0)
2025-06-02 19:30:30.902933 | orchestrator | Requirement already satisfied: click in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (8.2.1)
2025-06-02 19:30:30.904683 | orchestrator | Requirement already satisfied: colorama in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.4.6)
2025-06-02 19:30:30.905977 | orchestrator | Requirement already satisfied: fasteners in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.19)
2025-06-02 19:30:30.907100 | orchestrator | Requirement already satisfied: sh in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (2.2.2)
2025-06-02 19:30:30.938595 | orchestrator | Requirement already satisfied: charset-normalizer<4,>=2 in /opt/venv/lib/python3.12/site-packages (from requests) (3.4.2)
2025-06-02 19:30:30.940004 | orchestrator | Requirement already satisfied: idna<4,>=2.5 in /opt/venv/lib/python3.12/site-packages (from requests) (3.10)
2025-06-02 19:30:30.942403 | orchestrator | Requirement already satisfied: urllib3<3,>=1.21.1 in /opt/venv/lib/python3.12/site-packages (from requests) (2.4.0)
2025-06-02 19:30:30.943436 | orchestrator | Requirement already satisfied: certifi>=2017.4.17 in /opt/venv/lib/python3.12/site-packages (from requests) (2025.4.26)
2025-06-02 19:30:30.947585 | orchestrator | Requirement already satisfied: MarkupSafe>=2.0 in /opt/venv/lib/python3.12/site-packages (from Jinja2) (3.0.2)
2025-06-02 19:30:31.162676 | orchestrator | ++ which gilt
2025-06-02 19:30:31.165302 | orchestrator | + GILT=/opt/venv/bin/gilt
2025-06-02 19:30:31.165350 | orchestrator | + /opt/venv/bin/gilt overlay
2025-06-02 19:30:31.398498 | orchestrator | osism.cfg-generics:
2025-06-02 19:30:31.563927 | orchestrator | - copied (v0.20250530.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/environments/manager/images.yml to /opt/configuration/environments/manager/
2025-06-02 19:30:31.564045 | orchestrator | - copied (v0.20250530.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/render-images.py to /opt/configuration/environments/manager/
2025-06-02 19:30:31.564654 | orchestrator | - copied (v0.20250530.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/set-versions.py to /opt/configuration/environments/
2025-06-02 19:30:31.564673 | orchestrator | - running `/opt/configuration/scripts/wrapper-gilt.sh render-images` in /opt/configuration/environments/manager/
2025-06-02 19:30:32.597650 | orchestrator | - running `rm render-images.py` in /opt/configuration/environments/manager/
2025-06-02 19:30:32.609872 | orchestrator | - running `/opt/configuration/scripts/wrapper-gilt.sh set-versions` in /opt/configuration/environments/
2025-06-02 19:30:32.938312 | orchestrator | - running `rm set-versions.py` in /opt/configuration/environments/
2025-06-02 19:30:32.985283 | orchestrator | + [[ -e /opt/venv/bin/activate ]]
2025-06-02 19:30:32.985366 | orchestrator | + deactivate
2025-06-02 19:30:32.985381 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']'
2025-06-02 19:30:32.985394 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2025-06-02 19:30:32.985405 | orchestrator | + export PATH
2025-06-02 19:30:32.985416 | orchestrator | + unset _OLD_VIRTUAL_PATH
2025-06-02 19:30:32.985428 | orchestrator | + '[' -n '' ']'
2025-06-02 19:30:32.985441 | orchestrator | + hash -r
2025-06-02 19:30:32.985452 | orchestrator | + '[' -n '' ']'
2025-06-02 19:30:32.985462 | orchestrator | + unset VIRTUAL_ENV
2025-06-02 19:30:32.985473 | orchestrator | + unset VIRTUAL_ENV_PROMPT
2025-06-02 19:30:32.985484 | orchestrator | + '[' '!' '' = nondestructive ']'
2025-06-02 19:30:32.985494 | orchestrator | + unset -f deactivate
2025-06-02 19:30:32.985517 | orchestrator | + popd
2025-06-02 19:30:32.985528 | orchestrator | ~
2025-06-02 19:30:32.987410 | orchestrator | + [[ 9.1.0 == \l\a\t\e\s\t ]]
2025-06-02 19:30:32.987434 | orchestrator | + [[ ceph-ansible == \r\o\o\k ]]
2025-06-02 19:30:32.987851 | orchestrator | ++ semver 9.1.0 7.0.0
2025-06-02 19:30:33.042905 | orchestrator | + [[ 1 -ge 0 ]]
2025-06-02 19:30:33.042971 | orchestrator | + echo 'enable_osism_kubernetes: true'
2025-06-02 19:30:33.042986 | orchestrator | + /opt/configuration/scripts/enable-resource-nodes.sh
2025-06-02 19:30:33.080057 | orchestrator | + [[ -e /opt/venv/bin/activate ]]
2025-06-02 19:30:33.080125 | orchestrator | + source /opt/venv/bin/activate
2025-06-02 19:30:33.080137 | orchestrator | ++ deactivate nondestructive
2025-06-02 19:30:33.080157 | orchestrator | ++ '[' -n '' ']'
2025-06-02 19:30:33.080297 | orchestrator | ++ '[' -n '' ']'
2025-06-02 19:30:33.080317 | orchestrator | ++ hash -r
2025-06-02 19:30:33.080452 | orchestrator | ++ '[' -n '' ']'
2025-06-02 19:30:33.080468 | orchestrator | ++ unset VIRTUAL_ENV
2025-06-02 19:30:33.080479 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT
2025-06-02 19:30:33.080494 | orchestrator | ++ '[' '!' nondestructive = nondestructive ']'
2025-06-02 19:30:33.080731 | orchestrator | ++ '[' linux-gnu = cygwin ']'
2025-06-02 19:30:33.080747 | orchestrator | ++ '[' linux-gnu = msys ']'
2025-06-02 19:30:33.080824 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv
2025-06-02 19:30:33.080838 | orchestrator | ++ VIRTUAL_ENV=/opt/venv
2025-06-02 19:30:33.080854 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2025-06-02 19:30:33.080896 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2025-06-02 19:30:33.080985 | orchestrator | ++ export PATH
2025-06-02 19:30:33.081120 | orchestrator | ++ '[' -n '' ']'
2025-06-02 19:30:33.081139 | orchestrator | ++ '[' -z '' ']'
2025-06-02 19:30:33.081408 | orchestrator | ++ _OLD_VIRTUAL_PS1=
2025-06-02 19:30:33.081423 | orchestrator | ++ PS1='(venv) '
2025-06-02 19:30:33.081434 | orchestrator | ++ export PS1
2025-06-02 19:30:33.081445 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) '
2025-06-02 19:30:33.081456 | orchestrator | ++ export VIRTUAL_ENV_PROMPT
2025-06-02 19:30:33.081467 | orchestrator | ++ hash -r
2025-06-02 19:30:33.081641 | orchestrator | + ansible-playbook -i testbed-manager, --vault-password-file /opt/configuration/environments/.vault_pass /opt/configuration/ansible/manager-part-3.yml
2025-06-02 19:30:34.196089 | orchestrator |
2025-06-02 19:30:34.196216 | orchestrator | PLAY [Copy custom facts] *******************************************************
2025-06-02 19:30:34.196235 | orchestrator |
2025-06-02 19:30:34.196247 | orchestrator | TASK [Create custom facts directory] *******************************************
2025-06-02 19:30:34.813956 | orchestrator | ok: [testbed-manager]
2025-06-02 19:30:34.814137 | orchestrator |
2025-06-02 19:30:34.814159 | orchestrator | TASK [Copy fact files] *********************************************************
2025-06-02 19:30:35.838101 | orchestrator | changed: [testbed-manager]
2025-06-02 19:30:35.838240 | orchestrator |
2025-06-02 19:30:35.838269 | orchestrator | PLAY [Before the deployment of the manager] ************************************
2025-06-02 19:30:35.838288 | orchestrator |
2025-06-02 19:30:35.838307 | orchestrator | TASK [Gathering Facts] *********************************************************
2025-06-02 19:30:38.185826 | orchestrator | ok: [testbed-manager]
2025-06-02 19:30:38.185944 | orchestrator |
2025-06-02 19:30:38.185962 | orchestrator | TASK [Get /opt/manager-vars.sh] ************************************************
2025-06-02 19:30:38.232345 | orchestrator | ok: [testbed-manager]
2025-06-02 19:30:38.232453 | orchestrator |
2025-06-02 19:30:38.232472 | orchestrator | TASK [Add ara_server_mariadb_volume_type parameter] ****************************
2025-06-02 19:30:38.719497 | orchestrator | changed: [testbed-manager]
2025-06-02 19:30:38.719599 | orchestrator |
2025-06-02 19:30:38.719617 | orchestrator | TASK [Add netbox_enable parameter] *********************************************
2025-06-02 19:30:38.760964 | orchestrator | skipping: [testbed-manager]
2025-06-02 19:30:38.761042 | orchestrator |
2025-06-02 19:30:38.761055 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************
2025-06-02 19:30:39.120119 | orchestrator | changed: [testbed-manager]
2025-06-02 19:30:39.120214 | orchestrator |
2025-06-02 19:30:39.120229 | orchestrator | TASK [Use insecure glance configuration] ***************************************
2025-06-02 19:30:39.172577 | orchestrator | skipping: [testbed-manager]
2025-06-02 19:30:39.172687 | orchestrator |
2025-06-02 19:30:39.172709 | orchestrator | TASK [Check if /etc/OTC_region exist] ******************************************
2025-06-02 19:30:39.504413 | orchestrator | ok: [testbed-manager]
2025-06-02 19:30:39.504521 | orchestrator |
2025-06-02 19:30:39.504536 | orchestrator | TASK [Add nova_compute_virt_type parameter] ************************************
2025-06-02 19:30:39.615845 | orchestrator | skipping: [testbed-manager]
2025-06-02 19:30:39.615940 | orchestrator |
2025-06-02 19:30:39.615955 | orchestrator | PLAY [Apply role traefik] ******************************************************
2025-06-02 19:30:39.615969 | orchestrator |
2025-06-02 19:30:39.615980 | orchestrator | TASK [Gathering Facts] *********************************************************
2025-06-02 19:30:41.424236 | orchestrator | ok: [testbed-manager]
2025-06-02 19:30:41.424348 | orchestrator |
2025-06-02 19:30:41.424365 | orchestrator | TASK [Apply traefik role] ******************************************************
2025-06-02 19:30:41.529015 | orchestrator | included: osism.services.traefik for testbed-manager
2025-06-02 19:30:41.529111 | orchestrator |
2025-06-02 19:30:41.529126 | orchestrator | TASK [osism.services.traefik : Include config tasks] ***************************
2025-06-02 19:30:41.586169 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/config.yml for testbed-manager
2025-06-02 19:30:41.586264 | orchestrator |
2025-06-02 19:30:41.586278 | orchestrator | TASK [osism.services.traefik : Create required directories] ********************
2025-06-02 19:30:42.669522 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik)
2025-06-02 19:30:42.669624 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/certificates)
2025-06-02 19:30:42.669642 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/configuration)
2025-06-02 19:30:42.669654 | orchestrator |
2025-06-02 19:30:42.669666 | orchestrator | TASK [osism.services.traefik : Copy configuration files] ***********************
2025-06-02 19:30:44.465889 | orchestrator | changed: [testbed-manager] => (item=traefik.yml)
2025-06-02 19:30:44.466006 | orchestrator | changed: [testbed-manager] => (item=traefik.env)
2025-06-02 19:30:44.466076 | orchestrator | changed: [testbed-manager] => (item=certificates.yml)
2025-06-02 19:30:44.466089 | orchestrator |
2025-06-02 19:30:44.466102 | orchestrator | TASK [osism.services.traefik : Copy certificate cert files] ********************
2025-06-02 19:30:45.085847 | orchestrator | changed: [testbed-manager] => (item=None)
2025-06-02 19:30:45.085951 | orchestrator | changed: [testbed-manager]
2025-06-02 19:30:45.085967 | orchestrator |
2025-06-02 19:30:45.085980 | orchestrator | TASK [osism.services.traefik : Copy certificate key files] *********************
2025-06-02 19:30:45.740917 | orchestrator | changed: [testbed-manager] => (item=None)
2025-06-02 19:30:45.741039 | orchestrator | changed: [testbed-manager]
2025-06-02 19:30:45.741056 | orchestrator |
2025-06-02 19:30:45.741068 | orchestrator | TASK [osism.services.traefik : Copy dynamic configuration] *********************
2025-06-02 19:30:45.798112 | orchestrator | skipping: [testbed-manager]
2025-06-02 19:30:45.798215 | orchestrator |
2025-06-02 19:30:45.798238 | orchestrator | TASK [osism.services.traefik : Remove dynamic configuration] *******************
2025-06-02 19:30:46.177229 | orchestrator | ok: [testbed-manager]
2025-06-02 19:30:46.177328 | orchestrator |
2025-06-02 19:30:46.177343 | orchestrator | TASK [osism.services.traefik : Include service tasks] **************************
2025-06-02 19:30:46.256580 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/service.yml for testbed-manager
2025-06-02 19:30:46.256666 | orchestrator |
2025-06-02 19:30:46.256681 | orchestrator | TASK [osism.services.traefik : Create traefik external network] ****************
2025-06-02 19:30:47.314857 | orchestrator | changed: [testbed-manager]
2025-06-02 19:30:47.314966 | orchestrator |
2025-06-02 19:30:47.314994 | orchestrator | TASK [osism.services.traefik : Copy docker-compose.yml file] *******************
2025-06-02 19:30:48.180123 | orchestrator | changed: [testbed-manager]
2025-06-02 19:30:48.180236 | orchestrator |
2025-06-02 19:30:48.180255 | orchestrator | TASK [osism.services.traefik : Manage traefik service] *************************
2025-06-02 19:30:59.462989 | orchestrator | changed: [testbed-manager]
2025-06-02 19:30:59.463147 | orchestrator |
2025-06-02 19:30:59.463184 | orchestrator | RUNNING HANDLER [osism.services.traefik : Restart traefik service] *************
2025-06-02 19:30:59.515153 | orchestrator | skipping: [testbed-manager]
2025-06-02 19:30:59.515229 | orchestrator |
2025-06-02 19:30:59.515243 | orchestrator | PLAY [Deploy manager service] **************************************************
2025-06-02 19:30:59.515255 | orchestrator |
2025-06-02 19:30:59.515267 | orchestrator | TASK [Gathering Facts] *********************************************************
2025-06-02 19:31:01.410277 | orchestrator | ok: [testbed-manager]
2025-06-02 19:31:01.410376 | orchestrator |
2025-06-02 19:31:01.410392 | orchestrator | TASK [Apply manager role] ******************************************************
2025-06-02 19:31:01.523242 | orchestrator | included: osism.services.manager for testbed-manager
2025-06-02 19:31:01.523311 | orchestrator |
2025-06-02 19:31:01.523324 | orchestrator | TASK [osism.services.manager : Include install tasks] **************************
2025-06-02 19:31:01.582355 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/install-Debian-family.yml for testbed-manager
2025-06-02 19:31:01.582425 | orchestrator |
2025-06-02 19:31:01.582439 | orchestrator | TASK [osism.services.manager : Install required packages] **********************
2025-06-02 19:31:04.158519 | orchestrator | ok: [testbed-manager]
2025-06-02 19:31:04.158628 | orchestrator |
2025-06-02 19:31:04.158645 | orchestrator | TASK [osism.services.manager : Gather variables for each operating system] *****
2025-06-02 19:31:04.211718 | orchestrator | ok: [testbed-manager]
2025-06-02 19:31:04.211844 | orchestrator |
2025-06-02 19:31:04.211865 | orchestrator | TASK [osism.services.manager : Include config tasks] ***************************
2025-06-02 19:31:04.334738 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config.yml for testbed-manager
2025-06-02 19:31:04.334868 | orchestrator |
2025-06-02 19:31:04.334885 | orchestrator | TASK [osism.services.manager : Create required directories] ********************
2025-06-02 19:31:07.211220 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible)
2025-06-02 19:31:07.211401 | orchestrator | changed: [testbed-manager] => (item=/opt/archive)
2025-06-02 19:31:07.211421 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/configuration)
2025-06-02 19:31:07.211433 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/data)
2025-06-02 19:31:07.211443 | orchestrator | ok: [testbed-manager] => (item=/opt/manager)
2025-06-02 19:31:07.211455 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/secrets)
2025-06-02 19:31:07.211465 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible/secrets)
2025-06-02 19:31:07.211476 | orchestrator | changed: [testbed-manager] => (item=/opt/state)
2025-06-02 19:31:07.211487 | orchestrator |
2025-06-02 19:31:07.211502 | orchestrator | TASK [osism.services.manager : Copy all environment file] **********************
2025-06-02 19:31:07.854265 | orchestrator | changed: [testbed-manager]
2025-06-02 19:31:07.854367 | orchestrator |
2025-06-02 19:31:07.854402 | orchestrator | TASK [osism.services.manager : Copy client environment file] *******************
2025-06-02 19:31:08.527139 | orchestrator | changed: [testbed-manager]
2025-06-02 19:31:08.527230 | orchestrator |
2025-06-02 19:31:08.527246 | orchestrator | TASK [osism.services.manager : Include ara config tasks] ***********************
2025-06-02 19:31:08.612358 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ara.yml for testbed-manager
2025-06-02 19:31:08.612447 | orchestrator |
2025-06-02 19:31:08.612463 | orchestrator | TASK [osism.services.manager : Copy ARA environment files] *********************
2025-06-02 19:31:09.850896 | orchestrator | changed: [testbed-manager] => (item=ara)
2025-06-02 19:31:09.851008 | orchestrator | changed: [testbed-manager] => (item=ara-server)
2025-06-02 19:31:09.851024 | orchestrator |
2025-06-02 19:31:09.851037 | orchestrator | TASK [osism.services.manager : Copy MariaDB environment file] ******************
2025-06-02 19:31:10.490143 | orchestrator | changed: [testbed-manager]
2025-06-02 19:31:10.490247 | orchestrator |
2025-06-02 19:31:10.490263 | orchestrator | TASK [osism.services.manager : Include vault config tasks] *********************
2025-06-02 19:31:10.546148 | orchestrator | skipping: [testbed-manager]
2025-06-02 19:31:10.546230 | orchestrator |
2025-06-02 19:31:10.546243 | orchestrator | TASK [osism.services.manager : Include ansible config tasks] *******************
2025-06-02 19:31:10.597520 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ansible.yml for testbed-manager
2025-06-02 19:31:10.597589 | orchestrator |
2025-06-02 19:31:10.597602 | orchestrator | TASK [osism.services.manager : Copy private ssh keys] **************************
2025-06-02 19:31:12.014152 | orchestrator | changed: [testbed-manager] => (item=None)
2025-06-02 19:31:12.014247 | orchestrator | changed: [testbed-manager] => (item=None)
2025-06-02 19:31:12.014258 | orchestrator | changed: [testbed-manager]
2025-06-02 19:31:12.014267 | orchestrator |
2025-06-02 19:31:12.014274 | orchestrator | TASK [osism.services.manager : Copy ansible environment file] ******************
2025-06-02 19:31:12.655118 | orchestrator | changed: [testbed-manager]
2025-06-02 19:31:12.655205 | orchestrator |
2025-06-02 19:31:12.655215 | orchestrator | TASK [osism.services.manager : Include netbox config tasks] ********************
2025-06-02 19:31:12.703903 | orchestrator | skipping: [testbed-manager]
2025-06-02 19:31:12.703977 | orchestrator |
2025-06-02 19:31:12.703987 | orchestrator | TASK [osism.services.manager : Include celery config tasks] ********************
2025-06-02 19:31:12.796430 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-celery.yml for testbed-manager
2025-06-02 19:31:12.796504 | orchestrator |
2025-06-02 19:31:12.796513 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_watches] ****************
2025-06-02 19:31:13.353376 | orchestrator | changed: [testbed-manager]
2025-06-02 19:31:13.353486 | orchestrator |
2025-06-02 19:31:13.353504 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_instances] **************
2025-06-02 19:31:13.762246 | orchestrator | changed: [testbed-manager]
2025-06-02 19:31:13.762351 | orchestrator |
2025-06-02 19:31:13.762367 | orchestrator | TASK [osism.services.manager : Copy celery environment files] ******************
2025-06-02 19:31:15.001304 | orchestrator | changed: [testbed-manager] => (item=conductor)
2025-06-02 19:31:15.001920 | orchestrator | changed: [testbed-manager] => (item=openstack)
2025-06-02 19:31:15.001945 | orchestrator |
2025-06-02 19:31:15.001960 | orchestrator | TASK [osism.services.manager : Copy listener environment file] *****************
2025-06-02 19:31:15.716380 | orchestrator | changed: [testbed-manager]
2025-06-02 19:31:15.716519 | orchestrator |
2025-06-02 19:31:15.716547 | orchestrator | TASK [osism.services.manager : Check for conductor.yml] ************************
2025-06-02 19:31:16.144997 | orchestrator | ok: [testbed-manager]
2025-06-02 19:31:16.145098 | orchestrator |
2025-06-02 19:31:16.145116 | orchestrator | TASK [osism.services.manager : Copy conductor configuration file] **************
2025-06-02 19:31:16.510811 | orchestrator | changed: [testbed-manager]
2025-06-02 19:31:16.510909 | orchestrator |
2025-06-02 19:31:16.510924 | orchestrator | TASK [osism.services.manager : Copy empty conductor configuration file] ********
2025-06-02 19:31:16.558236 | orchestrator | skipping: [testbed-manager]
2025-06-02 19:31:16.558318 | orchestrator |
2025-06-02 19:31:16.558331 | orchestrator | TASK [osism.services.manager : Include wrapper config tasks] *******************
2025-06-02 19:31:16.627141 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-wrapper.yml for testbed-manager
2025-06-02 19:31:16.627238 | orchestrator |
2025-06-02 19:31:16.627254 | orchestrator | TASK [osism.services.manager : Include wrapper vars file] **********************
2025-06-02 19:31:16.674076 | orchestrator | ok: [testbed-manager]
2025-06-02 19:31:16.674150 | orchestrator |
2025-06-02 19:31:16.674163 | orchestrator | TASK [osism.services.manager : Copy wrapper scripts] ***************************
2025-06-02 19:31:18.688231 | orchestrator | changed: [testbed-manager] => (item=osism)
2025-06-02 19:31:18.688373 | orchestrator | changed: [testbed-manager] => (item=osism-update-docker)
2025-06-02 19:31:18.688391 | orchestrator | changed: [testbed-manager] => (item=osism-update-manager)
2025-06-02 19:31:18.688403 | orchestrator |
2025-06-02 19:31:18.688415 | orchestrator | TASK [osism.services.manager : Copy cilium wrapper script] *********************
2025-06-02 19:31:19.380836 | orchestrator | changed: [testbed-manager]
2025-06-02 19:31:19.380941 | orchestrator |
2025-06-02 19:31:19.380956 | orchestrator | TASK [osism.services.manager : Copy hubble wrapper script] *********************
2025-06-02 19:31:20.083701 | orchestrator | changed: [testbed-manager]
2025-06-02 19:31:20.083905 | orchestrator |
2025-06-02 19:31:20.083936 | orchestrator | TASK [osism.services.manager : Copy flux wrapper script] ***********************
2025-06-02 19:31:20.804336 | orchestrator | changed: [testbed-manager]
2025-06-02 19:31:20.804447 | orchestrator |
2025-06-02 19:31:20.804463 | orchestrator | TASK [osism.services.manager : Include scripts config tasks] *******************
2025-06-02 19:31:20.885588 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-scripts.yml for testbed-manager
2025-06-02 19:31:20.885680 | orchestrator |
2025-06-02 19:31:20.885695 | orchestrator | TASK [osism.services.manager : Include scripts vars file] **********************
2025-06-02 19:31:20.928749 | orchestrator | ok: [testbed-manager]
2025-06-02 19:31:20.928877 | orchestrator |
2025-06-02 19:31:20.928891 | orchestrator | TASK [osism.services.manager : Copy scripts] ***********************************
2025-06-02 19:31:21.673853 | orchestrator | changed: [testbed-manager] => (item=osism-include)
2025-06-02 19:31:21.673959 | orchestrator |
2025-06-02 19:31:21.673975 | orchestrator | TASK [osism.services.manager : Include service tasks] **************************
2025-06-02 19:31:21.758332 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/service.yml for testbed-manager
2025-06-02 19:31:21.758426 | orchestrator |
2025-06-02 19:31:21.758440 | orchestrator | TASK [osism.services.manager : Copy manager systemd unit file] *****************
2025-06-02 19:31:22.473531 | orchestrator | changed: [testbed-manager]
2025-06-02 19:31:22.473631 | orchestrator |
2025-06-02 19:31:22.473645 | orchestrator | TASK [osism.services.manager : Create traefik external network] ****************
2025-06-02 19:31:23.097295 | orchestrator | ok: [testbed-manager]
2025-06-02 19:31:23.097394 | orchestrator |
2025-06-02 19:31:23.097410 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb < 11.0.0] ***
2025-06-02 19:31:23.159231 | orchestrator | skipping: [testbed-manager]
2025-06-02 19:31:23.159334 | orchestrator |
2025-06-02 19:31:23.159351 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb >= 11.0.0] ***
2025-06-02 19:31:23.219384 | orchestrator | ok: [testbed-manager]
2025-06-02 19:31:23.219477 | orchestrator |
2025-06-02 19:31:23.219492 | orchestrator | TASK [osism.services.manager : Copy docker-compose.yml file] *******************
2025-06-02 19:31:24.123682 | orchestrator | changed: [testbed-manager]
2025-06-02 19:31:24.123825 | orchestrator |
2025-06-02 19:31:24.123840 | orchestrator | TASK [osism.services.manager : Pull container images] **************************
2025-06-02 19:32:24.852814 | orchestrator | changed: [testbed-manager]
2025-06-02 19:32:24.852935 | orchestrator |
2025-06-02 19:32:24.852951 | orchestrator | TASK [osism.services.manager : Stop and disable old service docker-compose@manager] ***
2025-06-02 19:32:25.870350 | orchestrator | ok: [testbed-manager]
2025-06-02 19:32:25.870402 | orchestrator |
2025-06-02 19:32:25.870408 | orchestrator | TASK [osism.services.manager : Do a manual start of the manager service] *******
2025-06-02 19:32:25.924425 | orchestrator | skipping: [testbed-manager]
2025-06-02 19:32:25.924495 | orchestrator |
2025-06-02 19:32:25.924505 | orchestrator | TASK [osism.services.manager : Manage manager service] *************************
2025-06-02 19:32:28.672603 | orchestrator | changed: [testbed-manager]
2025-06-02 19:32:28.672764 | orchestrator |
2025-06-02 19:32:28.672783 | orchestrator | TASK [osism.services.manager : Register that manager service was started] ******
2025-06-02 19:32:28.745219 | orchestrator | ok: [testbed-manager]
2025-06-02 19:32:28.745329 | orchestrator |
2025-06-02 19:32:28.745346 | orchestrator | TASK [osism.services.manager : Flush handlers] *********************************
2025-06-02 19:32:28.745359 | orchestrator |
2025-06-02 19:32:28.745370 | orchestrator | RUNNING
HANDLER [osism.services.manager : Restart manager service] ************* 2025-06-02 19:32:28.810941 | orchestrator | skipping: [testbed-manager] 2025-06-02 19:32:28.811020 | orchestrator | 2025-06-02 19:32:28.811062 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for manager service to start] *** 2025-06-02 19:33:28.856446 | orchestrator | Pausing for 60 seconds 2025-06-02 19:33:28.856566 | orchestrator | changed: [testbed-manager] 2025-06-02 19:33:28.856582 | orchestrator | 2025-06-02 19:33:28.856596 | orchestrator | RUNNING HANDLER [osism.services.manager : Ensure that all containers are up] *** 2025-06-02 19:33:33.433975 | orchestrator | changed: [testbed-manager] 2025-06-02 19:33:33.434148 | orchestrator | 2025-06-02 19:33:33.434165 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for an healthy manager service] *** 2025-06-02 19:34:15.005588 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (50 retries left). 2025-06-02 19:34:15.005755 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (49 retries left). 
2025-06-02 19:34:15.005771 | orchestrator | changed: [testbed-manager] 2025-06-02 19:34:15.005785 | orchestrator | 2025-06-02 19:34:15.005797 | orchestrator | RUNNING HANDLER [osism.services.manager : Copy osismclient bash completion script] *** 2025-06-02 19:34:23.904778 | orchestrator | changed: [testbed-manager] 2025-06-02 19:34:23.904928 | orchestrator | 2025-06-02 19:34:23.904971 | orchestrator | TASK [osism.services.manager : Include initialize tasks] *********************** 2025-06-02 19:34:23.996081 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/initialize.yml for testbed-manager 2025-06-02 19:34:23.996194 | orchestrator | 2025-06-02 19:34:23.996209 | orchestrator | TASK [osism.services.manager : Flush handlers] ********************************* 2025-06-02 19:34:23.996221 | orchestrator | 2025-06-02 19:34:23.996232 | orchestrator | TASK [osism.services.manager : Include vault initialize tasks] ***************** 2025-06-02 19:34:24.046796 | orchestrator | skipping: [testbed-manager] 2025-06-02 19:34:24.046908 | orchestrator | 2025-06-02 19:34:24.046922 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-02 19:34:24.046937 | orchestrator | testbed-manager : ok=64 changed=35 unreachable=0 failed=0 skipped=12 rescued=0 ignored=0 2025-06-02 19:34:24.046949 | orchestrator | 2025-06-02 19:34:24.170381 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2025-06-02 19:34:24.170491 | orchestrator | + deactivate 2025-06-02 19:34:24.170506 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']' 2025-06-02 19:34:24.170520 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2025-06-02 19:34:24.170532 | orchestrator | + export PATH 2025-06-02 19:34:24.170550 | orchestrator | + unset _OLD_VIRTUAL_PATH 2025-06-02 
19:34:24.170562 | orchestrator | + '[' -n '' ']' 2025-06-02 19:34:24.170574 | orchestrator | + hash -r 2025-06-02 19:34:24.170586 | orchestrator | + '[' -n '' ']' 2025-06-02 19:34:24.170597 | orchestrator | + unset VIRTUAL_ENV 2025-06-02 19:34:24.170608 | orchestrator | + unset VIRTUAL_ENV_PROMPT 2025-06-02 19:34:24.170619 | orchestrator | + '[' '!' '' = nondestructive ']' 2025-06-02 19:34:24.170630 | orchestrator | + unset -f deactivate 2025-06-02 19:34:24.170642 | orchestrator | + cp /home/dragon/.ssh/id_rsa.pub /opt/ansible/secrets/id_rsa.operator.pub 2025-06-02 19:34:24.174965 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2025-06-02 19:34:24.174993 | orchestrator | + wait_for_container_healthy 60 ceph-ansible 2025-06-02 19:34:24.175005 | orchestrator | + local max_attempts=60 2025-06-02 19:34:24.175016 | orchestrator | + local name=ceph-ansible 2025-06-02 19:34:24.175027 | orchestrator | + local attempt_num=1 2025-06-02 19:34:24.176227 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-06-02 19:34:24.218834 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-06-02 19:34:24.218930 | orchestrator | + wait_for_container_healthy 60 kolla-ansible 2025-06-02 19:34:24.218944 | orchestrator | + local max_attempts=60 2025-06-02 19:34:24.218957 | orchestrator | + local name=kolla-ansible 2025-06-02 19:34:24.218968 | orchestrator | + local attempt_num=1 2025-06-02 19:34:24.219772 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible 2025-06-02 19:34:24.261162 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-06-02 19:34:24.261252 | orchestrator | + wait_for_container_healthy 60 osism-ansible 2025-06-02 19:34:24.261269 | orchestrator | + local max_attempts=60 2025-06-02 19:34:24.261281 | orchestrator | + local name=osism-ansible 2025-06-02 19:34:24.261292 | orchestrator | + local attempt_num=1 2025-06-02 19:34:24.262127 | orchestrator | ++ /usr/bin/docker inspect -f 
'{{.State.Health.Status}}' osism-ansible 2025-06-02 19:34:24.298241 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-06-02 19:34:24.298280 | orchestrator | + [[ true == \t\r\u\e ]] 2025-06-02 19:34:24.298291 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh 2025-06-02 19:34:25.008840 | orchestrator | + docker compose --project-directory /opt/manager ps 2025-06-02 19:34:25.188433 | orchestrator | NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS 2025-06-02 19:34:25.188558 | orchestrator | ceph-ansible registry.osism.tech/osism/ceph-ansible:0.20250530.0 "/entrypoint.sh osis…" ceph-ansible About a minute ago Up About a minute (healthy) 2025-06-02 19:34:25.188575 | orchestrator | kolla-ansible registry.osism.tech/osism/kolla-ansible:0.20250530.0 "/entrypoint.sh osis…" kolla-ansible About a minute ago Up About a minute (healthy) 2025-06-02 19:34:25.188587 | orchestrator | manager-api-1 registry.osism.tech/osism/osism:0.20250530.0 "/sbin/tini -- osism…" api About a minute ago Up About a minute (healthy) 192.168.16.5:8000->8000/tcp 2025-06-02 19:34:25.188600 | orchestrator | manager-ara-server-1 registry.osism.tech/osism/ara-server:1.7.2 "sh -c '/wait && /ru…" ara-server About a minute ago Up About a minute (healthy) 8000/tcp 2025-06-02 19:34:25.188611 | orchestrator | manager-beat-1 registry.osism.tech/osism/osism:0.20250530.0 "/sbin/tini -- osism…" beat About a minute ago Up About a minute (healthy) 2025-06-02 19:34:25.188622 | orchestrator | manager-flower-1 registry.osism.tech/osism/osism:0.20250530.0 "/sbin/tini -- osism…" flower About a minute ago Up About a minute (healthy) 2025-06-02 19:34:25.188632 | orchestrator | manager-inventory_reconciler-1 registry.osism.tech/osism/inventory-reconciler:0.20250530.0 "/sbin/tini -- /entr…" inventory_reconciler About a minute ago Up 52 seconds (healthy) 2025-06-02 19:34:25.188643 | orchestrator | manager-listener-1 registry.osism.tech/osism/osism:0.20250530.0 "/sbin/tini -- osism…" listener About a minute 
ago Up About a minute (healthy) 2025-06-02 19:34:25.188707 | orchestrator | manager-mariadb-1 registry.osism.tech/dockerhub/library/mariadb:11.7.2 "docker-entrypoint.s…" mariadb About a minute ago Up About a minute (healthy) 3306/tcp 2025-06-02 19:34:25.188718 | orchestrator | manager-openstack-1 registry.osism.tech/osism/osism:0.20250530.0 "/sbin/tini -- osism…" openstack About a minute ago Up About a minute (healthy) 2025-06-02 19:34:25.188728 | orchestrator | manager-redis-1 registry.osism.tech/dockerhub/library/redis:7.4.4-alpine "docker-entrypoint.s…" redis About a minute ago Up About a minute (healthy) 6379/tcp 2025-06-02 19:34:25.188739 | orchestrator | osism-ansible registry.osism.tech/osism/osism-ansible:0.20250531.0 "/entrypoint.sh osis…" osism-ansible About a minute ago Up About a minute (healthy) 2025-06-02 19:34:25.188750 | orchestrator | osism-kubernetes registry.osism.tech/osism/osism-kubernetes:0.20250530.0 "/entrypoint.sh osis…" osism-kubernetes About a minute ago Up About a minute (healthy) 2025-06-02 19:34:25.188761 | orchestrator | osismclient registry.osism.tech/osism/osism:0.20250530.0 "/sbin/tini -- sleep…" osismclient About a minute ago Up About a minute (healthy) 2025-06-02 19:34:25.196339 | orchestrator | ++ semver 9.1.0 7.0.0 2025-06-02 19:34:25.240208 | orchestrator | + [[ 1 -ge 0 ]] 2025-06-02 19:34:25.240243 | orchestrator | + sed -i s/community.general.yaml/osism.commons.still_alive/ /opt/configuration/environments/ansible.cfg 2025-06-02 19:34:25.246578 | orchestrator | + osism apply resolvconf -l testbed-manager 2025-06-02 19:34:26.967238 | orchestrator | Registering Redlock._acquired_script 2025-06-02 19:34:26.967393 | orchestrator | Registering Redlock._extend_script 2025-06-02 19:34:26.967408 | orchestrator | Registering Redlock._release_script 2025-06-02 19:34:27.163558 | orchestrator | 2025-06-02 19:34:27 | INFO  | Task a551bb3a-bf48-4624-9151-e03ce6150c05 (resolvconf) was prepared for execution. 
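The `set -x` trace above only shows the executed commands inside `wait_for_container_healthy`, not the function itself. A minimal sketch of what such a helper looks like, reconstructed from the traced locals (`max_attempts`, `name`, `attempt_num`) and the `docker inspect` health query; the polling interval and the failure message are assumptions, as they never appear in the trace because all three containers were healthy on the first probe:

```shell
# Sketch of wait_for_container_healthy as suggested by the xtrace above.
# Assumptions: retry delay of 5s and the error message; only the locals
# and the docker inspect call are visible in the log.
wait_for_container_healthy() {
    local max_attempts="$1"
    local name="$2"
    local attempt_num=1
    # Poll the container's health status until Docker reports "healthy".
    until [[ "$(docker inspect -f '{{.State.Health.Status}}' "$name")" == "healthy" ]]; do
        if (( attempt_num >= max_attempts )); then
            echo "container $name did not become healthy" >&2
            return 1
        fi
        (( attempt_num++ ))
        sleep 5
    done
}
```

In the run above the loop exits immediately for ceph-ansible, kolla-ansible, and osism-ansible, which is why only a single `docker inspect` appears per container.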
2025-06-02 19:34:27.163713 | orchestrator | 2025-06-02 19:34:27 | INFO  | It takes a moment until task a551bb3a-bf48-4624-9151-e03ce6150c05 (resolvconf) has been started and output is visible here.
2025-06-02 19:34:31.173595 | orchestrator |
2025-06-02 19:34:31.174266 | orchestrator | PLAY [Apply role resolvconf] ***************************************************
2025-06-02 19:34:31.174960 | orchestrator |
2025-06-02 19:34:31.176408 | orchestrator | TASK [Gathering Facts] *********************************************************
2025-06-02 19:34:31.177030 | orchestrator | Monday 02 June 2025 19:34:31 +0000 (0:00:00.157) 0:00:00.157 ***********
2025-06-02 19:34:34.913605 | orchestrator | ok: [testbed-manager]
2025-06-02 19:34:34.914204 | orchestrator |
2025-06-02 19:34:34.915032 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] ***
2025-06-02 19:34:34.915548 | orchestrator | Monday 02 June 2025 19:34:34 +0000 (0:00:03.741) 0:00:03.898 ***********
2025-06-02 19:34:34.980272 | orchestrator | skipping: [testbed-manager]
2025-06-02 19:34:34.981245 | orchestrator |
2025-06-02 19:34:34.981515 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] *********************
2025-06-02 19:34:34.983867 | orchestrator | Monday 02 June 2025 19:34:34 +0000 (0:00:00.067) 0:00:03.966 ***********
2025-06-02 19:34:35.064923 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager
2025-06-02 19:34:35.065433 | orchestrator |
2025-06-02 19:34:35.066434 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] ***
2025-06-02 19:34:35.067222 | orchestrator | Monday 02 June 2025 19:34:35 +0000 (0:00:00.084) 0:00:04.051 ***********
2025-06-02 19:34:35.129046 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager
2025-06-02 19:34:35.129706 | orchestrator |
2025-06-02 19:34:35.130601 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] ***
2025-06-02 19:34:35.131300 | orchestrator | Monday 02 June 2025 19:34:35 +0000 (0:00:00.064) 0:00:04.115 ***********
2025-06-02 19:34:36.183167 | orchestrator | ok: [testbed-manager]
2025-06-02 19:34:36.183559 | orchestrator |
2025-06-02 19:34:36.186007 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] *************
2025-06-02 19:34:36.186628 | orchestrator | Monday 02 June 2025 19:34:36 +0000 (0:00:01.052) 0:00:05.167 ***********
2025-06-02 19:34:36.238494 | orchestrator | skipping: [testbed-manager]
2025-06-02 19:34:36.238552 | orchestrator |
2025-06-02 19:34:36.239179 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] *****
2025-06-02 19:34:36.240137 | orchestrator | Monday 02 June 2025 19:34:36 +0000 (0:00:00.056) 0:00:05.224 ***********
2025-06-02 19:34:36.720219 | orchestrator | ok: [testbed-manager]
2025-06-02 19:34:36.720840 | orchestrator |
2025-06-02 19:34:36.721418 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] *******
2025-06-02 19:34:36.722269 | orchestrator | Monday 02 June 2025 19:34:36 +0000 (0:00:00.482) 0:00:05.707 ***********
2025-06-02 19:34:36.799888 | orchestrator | skipping: [testbed-manager]
2025-06-02 19:34:36.800916 | orchestrator |
2025-06-02 19:34:36.802138 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] ***
2025-06-02 19:34:36.802793 | orchestrator | Monday 02 June 2025 19:34:36 +0000 (0:00:00.078) 0:00:05.785 ***********
2025-06-02 19:34:37.335310 | orchestrator | changed: [testbed-manager]
2025-06-02 19:34:37.336465 | orchestrator |
2025-06-02 19:34:37.337294 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] *********************
2025-06-02 19:34:37.337999 | orchestrator | Monday 02 June 2025 19:34:37 +0000 (0:00:00.535) 0:00:06.321 ***********
2025-06-02 19:34:38.383215 | orchestrator | changed: [testbed-manager]
2025-06-02 19:34:38.384431 | orchestrator |
2025-06-02 19:34:38.385384 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ********
2025-06-02 19:34:38.386388 | orchestrator | Monday 02 June 2025 19:34:38 +0000 (0:00:01.046) 0:00:07.368 ***********
2025-06-02 19:34:39.364608 | orchestrator | ok: [testbed-manager]
2025-06-02 19:34:39.365465 | orchestrator |
2025-06-02 19:34:39.367031 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] ***
2025-06-02 19:34:39.367645 | orchestrator | Monday 02 June 2025 19:34:39 +0000 (0:00:00.981) 0:00:08.349 ***********
2025-06-02 19:34:39.449960 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager
2025-06-02 19:34:39.451962 | orchestrator |
2025-06-02 19:34:39.452310 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] *************
2025-06-02 19:34:39.453271 | orchestrator | Monday 02 June 2025 19:34:39 +0000 (0:00:00.086) 0:00:08.436 ***********
2025-06-02 19:34:40.603292 | orchestrator | changed: [testbed-manager]
2025-06-02 19:34:40.603416 | orchestrator |
2025-06-02 19:34:40.603832 | orchestrator | PLAY RECAP *********************************************************************
2025-06-02 19:34:40.604185 | orchestrator | 2025-06-02 19:34:40 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-06-02 19:34:40.604716 | orchestrator | 2025-06-02 19:34:40 | INFO  | Please wait and do not abort execution.
2025-06-02 19:34:40.605068 | orchestrator | testbed-manager : ok=10  changed=3  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2025-06-02 19:34:40.605492 | orchestrator |
2025-06-02 19:34:40.605764 | orchestrator |
2025-06-02 19:34:40.606252 | orchestrator | TASKS RECAP ********************************************************************
2025-06-02 19:34:40.606677 | orchestrator | Monday 02 June 2025 19:34:40 +0000 (0:00:01.150) 0:00:09.587 ***********
2025-06-02 19:34:40.607504 | orchestrator | ===============================================================================
2025-06-02 19:34:40.607557 | orchestrator | Gathering Facts --------------------------------------------------------- 3.74s
2025-06-02 19:34:40.608065 | orchestrator | osism.commons.resolvconf : Restart systemd-resolved service ------------- 1.15s
2025-06-02 19:34:40.608161 | orchestrator | osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf --- 1.05s
2025-06-02 19:34:40.608435 | orchestrator | osism.commons.resolvconf : Copy configuration files --------------------- 1.05s
2025-06-02 19:34:40.608756 | orchestrator | osism.commons.resolvconf : Start/enable systemd-resolved service -------- 0.98s
2025-06-02 19:34:40.609148 | orchestrator | osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf --- 0.54s
2025-06-02 19:34:40.609430 | orchestrator | osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf ----- 0.48s
2025-06-02 19:34:40.609804 | orchestrator | osism.commons.resolvconf : Include distribution specific configuration tasks --- 0.09s
2025-06-02 19:34:40.610106 | orchestrator | osism.commons.resolvconf : Include resolvconf tasks --------------------- 0.08s
2025-06-02 19:34:40.610400 | orchestrator | osism.commons.resolvconf : Archive existing file /etc/resolv.conf ------- 0.08s
2025-06-02 19:34:40.610711 | orchestrator | osism.commons.resolvconf : Check minimum and maximum number of name servers --- 0.07s
2025-06-02 19:34:40.611053 | orchestrator | osism.commons.resolvconf : Include distribution specific installation tasks --- 0.06s
2025-06-02 19:34:40.611244 | orchestrator | osism.commons.resolvconf : Install package systemd-resolved ------------- 0.06s
2025-06-02 19:34:41.092486 | orchestrator | + osism apply sshconfig
2025-06-02 19:34:42.631930 | orchestrator | Registering Redlock._acquired_script
2025-06-02 19:34:42.632036 | orchestrator | Registering Redlock._extend_script
2025-06-02 19:34:42.632053 | orchestrator | Registering Redlock._release_script
2025-06-02 19:34:42.690807 | orchestrator | 2025-06-02 19:34:42 | INFO  | Task 2593ddcf-1be6-4af6-8e75-1e5ce275da91 (sshconfig) was prepared for execution.
2025-06-02 19:34:42.690911 | orchestrator | 2025-06-02 19:34:42 | INFO  | It takes a moment until task 2593ddcf-1be6-4af6-8e75-1e5ce275da91 (sshconfig) has been started and output is visible here.
2025-06-02 19:34:46.259105 | orchestrator |
2025-06-02 19:34:46.259472 | orchestrator | PLAY [Apply role sshconfig] ****************************************************
2025-06-02 19:34:46.260578 | orchestrator |
2025-06-02 19:34:46.261526 | orchestrator | TASK [osism.commons.sshconfig : Get home directory of operator user] ***********
2025-06-02 19:34:46.263145 | orchestrator | Monday 02 June 2025 19:34:46 +0000 (0:00:00.145) 0:00:00.145 ***********
2025-06-02 19:34:46.793491 | orchestrator | ok: [testbed-manager]
2025-06-02 19:34:46.793758 | orchestrator |
2025-06-02 19:34:46.794592 | orchestrator | TASK [osism.commons.sshconfig : Ensure .ssh/config.d exist] ********************
2025-06-02 19:34:46.795137 | orchestrator | Monday 02 June 2025 19:34:46 +0000 (0:00:00.535) 0:00:00.681 ***********
2025-06-02 19:34:47.319161 | orchestrator | changed: [testbed-manager]
2025-06-02 19:34:47.319236 | orchestrator |
2025-06-02 19:34:47.321266 | orchestrator | TASK [osism.commons.sshconfig : Ensure config for each host exist] *************
2025-06-02 19:34:47.321280 | orchestrator | Monday 02 June 2025 19:34:47 +0000 (0:00:00.526) 0:00:01.207 ***********
2025-06-02 19:34:53.000137 | orchestrator | changed: [testbed-manager] => (item=testbed-manager)
2025-06-02 19:34:53.001502 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3)
2025-06-02 19:34:53.002927 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4)
2025-06-02 19:34:53.003721 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5)
2025-06-02 19:34:53.004558 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0)
2025-06-02 19:34:53.005487 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1)
2025-06-02 19:34:53.006126 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2)
2025-06-02 19:34:53.007033 | orchestrator |
2025-06-02 19:34:53.008527 | orchestrator | TASK [osism.commons.sshconfig : Add extra config] ******************************
2025-06-02 19:34:53.009786 | orchestrator | Monday 02 June 2025 19:34:52 +0000 (0:00:05.678) 0:00:06.885 ***********
2025-06-02 19:34:53.091289 | orchestrator | skipping: [testbed-manager]
2025-06-02 19:34:53.091369 | orchestrator |
2025-06-02 19:34:53.092324 | orchestrator | TASK [osism.commons.sshconfig : Assemble ssh config] ***************************
2025-06-02 19:34:53.092422 | orchestrator | Monday 02 June 2025 19:34:53 +0000 (0:00:00.092) 0:00:06.978 ***********
2025-06-02 19:34:53.667086 | orchestrator | changed: [testbed-manager]
2025-06-02 19:34:53.667227 | orchestrator |
2025-06-02 19:34:53.667305 | orchestrator | PLAY RECAP *********************************************************************
2025-06-02 19:34:53.667706 | orchestrator | 2025-06-02 19:34:53 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-06-02 19:34:53.668090 | orchestrator | 2025-06-02 19:34:53 | INFO  | Please wait and do not abort execution.
2025-06-02 19:34:53.669013 | orchestrator | testbed-manager : ok=4  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-06-02 19:34:53.669999 | orchestrator |
2025-06-02 19:34:53.671350 | orchestrator |
2025-06-02 19:34:53.672142 | orchestrator | TASKS RECAP ********************************************************************
2025-06-02 19:34:53.672708 | orchestrator | Monday 02 June 2025 19:34:53 +0000 (0:00:00.577) 0:00:07.555 ***********
2025-06-02 19:34:53.673606 | orchestrator | ===============================================================================
2025-06-02 19:34:53.674170 | orchestrator | osism.commons.sshconfig : Ensure config for each host exist ------------- 5.68s
2025-06-02 19:34:53.674557 | orchestrator | osism.commons.sshconfig : Assemble ssh config --------------------------- 0.58s
2025-06-02 19:34:53.675014 | orchestrator | osism.commons.sshconfig : Get home directory of operator user ----------- 0.54s
2025-06-02 19:34:53.675510 | orchestrator | osism.commons.sshconfig : Ensure .ssh/config.d exist -------------------- 0.53s
2025-06-02 19:34:53.675995 | orchestrator | osism.commons.sshconfig : Add extra config ------------------------------ 0.09s
2025-06-02 19:34:54.111796 | orchestrator | + osism apply known-hosts
2025-06-02 19:34:55.801116 | orchestrator | Registering Redlock._acquired_script
2025-06-02 19:34:55.801250 | orchestrator | Registering Redlock._extend_script
2025-06-02 19:34:55.801265 | orchestrator | Registering Redlock._release_script
2025-06-02 19:34:55.862069 | orchestrator | 2025-06-02 19:34:55 | INFO  | Task ea1db522-97d6-4a33-9784-9c937975c2e9 (known-hosts) was prepared for execution.
2025-06-02 19:34:55.862159 | orchestrator | 2025-06-02 19:34:55 | INFO  | It takes a moment until task ea1db522-97d6-4a33-9784-9c937975c2e9 (known-hosts) has been started and output is visible here.
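The known_hosts play that `osism apply known-hosts` triggers scans each testbed host with ssh-keyscan and writes the collected keys. A simplified sketch of the per-host mechanics, under the assumption that the osism.commons.known_hosts role effectively reduces to ssh-keyscan plus appending to a known_hosts file (the real role does this through Ansible tasks and per-host include files, as the output below shows); the function name here is hypothetical:

```shell
# Hypothetical helper sketching what the known_hosts play does per host:
# ssh-keyscan emits one "host keytype base64key" line per host key, and
# those lines are appended to the given known_hosts file.
scan_hosts_into_known_hosts() {
    local known_hosts="$1"; shift
    local host
    for host in "$@"; do
        ssh-keyscan "$host" 2>/dev/null >> "$known_hosts"
    done
}
```

For the hosts in this run the equivalent call would be `scan_hosts_into_known_hosts ~/.ssh/known_hosts testbed-manager testbed-node-{0..5}`, yielding the rsa, ed25519, and ecdsa entries written below.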
2025-06-02 19:34:59.906458 | orchestrator |
2025-06-02 19:34:59.907182 | orchestrator | PLAY [Apply role known_hosts] **************************************************
2025-06-02 19:34:59.907741 | orchestrator |
2025-06-02 19:34:59.909033 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname] ***
2025-06-02 19:34:59.909966 | orchestrator | Monday 02 June 2025 19:34:59 +0000 (0:00:00.140) 0:00:00.140 ***********
2025-06-02 19:35:05.545455 | orchestrator | ok: [testbed-manager] => (item=testbed-manager)
2025-06-02 19:35:05.545600 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3)
2025-06-02 19:35:05.545833 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4)
2025-06-02 19:35:05.546450 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5)
2025-06-02 19:35:05.547167 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0)
2025-06-02 19:35:05.548191 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1)
2025-06-02 19:35:05.548623 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2)
2025-06-02 19:35:05.549131 | orchestrator |
2025-06-02 19:35:05.549589 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname] ***
2025-06-02 19:35:05.550102 | orchestrator | Monday 02 June 2025 19:35:05 +0000 (0:00:05.639) 0:00:05.780 ***********
2025-06-02 19:35:05.697825 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager)
2025-06-02 19:35:05.698156 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3)
2025-06-02 19:35:05.698531 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4)
2025-06-02 19:35:05.699171 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5)
2025-06-02 19:35:05.699502 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0)
2025-06-02 19:35:05.699961 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1)
2025-06-02 19:35:05.700338 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2)
2025-06-02 19:35:05.700695 | orchestrator |
2025-06-02 19:35:05.701181 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-06-02 19:35:05.701484 | orchestrator | Monday 02 June 2025 19:35:05 +0000 (0:00:00.152) 0:00:05.932 ***********
2025-06-02 19:35:06.761054 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIG+pov6gC9fwlur0So0w9scAaCytqboiltSLIXqAASVm)
2025-06-02 19:35:06.762754 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCx67AUFw/g5C5lnIbFaaN5rbstpC47Lg/nxNGmuE10OCTAwadQhPEQTh66iUJ4JR6TFQV939VSSheGG8CLnUFWWjIwGyi/4L7eqJIvxV8DJCdK3uEY0X1jpe1EjuGyE307OaB3vzq/oKg86DKm5Ce1oaBOpHKaZ+3rkyY56I67HEaT86+5WGal/02AoGDLVslAprJ4+Fo6hSbHXbwmrNbgu2n40CGxdXPIHTiDkwqYovlLrXnEtmSXicOTthSEfT9fy2LeLCFTLS3Ka4S3U4Rdd7qwAiVUFaBAaEkdpPt4S6OZxCTOFKESyADZcllJEHo/tNDuz5fFnZO+rfGOPSuLbnVmkVDlOZetm9+IaSRZZBhTjKklt9JX049wpejQCg6A5Hbk7ZTirAfz/+TFWm8hLYuKy89qGqWOb4gRCQma5CPRHJF9TryZuvr5K1V8ME0xIhD/B2q26IS2zblr2YlGwZOfIRkUmQWl6NVVgKInc7CPOQE/TrV6Woz4g7sJZkc=)
2025-06-02 19:35:06.763464 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBAvQKe0TXFsJEAW7TQgdsIpqqQ7oLGhEZAtJ6mRRE3fWCfe4mtydqP/hQOFYIp7HyGh67YLub/XTdJMtuDv2wWI=)
2025-06-02 19:35:06.764152 | orchestrator |
2025-06-02 19:35:06.764809 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-06-02 19:35:06.765531 | orchestrator | Monday 02 June 2025 19:35:06 +0000 (0:00:01.062) 0:00:06.995 ***********
2025-06-02 19:35:07.701326 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPrYz82Q1zd833od2EsJztcnvrcgcI5VR4I5sQLBx1dGwv5fSHl5Ji9FFR3ee+RDBlUAaumySum75AQnlJMwCSA=)
2025-06-02 19:35:07.701650 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCiz1QCzRs91nSw1leWxEnWyCRjn9crhBtXyvjaJNChFjdMNTmtftmBl5FkF0rcDsDnrSoxKf7hLX0RM7Gdr9/2YC2DS3AWTtDVxRB70KOy97zh+oFUKlFxFgLmj6hWrz/qvP5Nr1oV0PIDPx5wSiO+E78lwCTYJvRhEoXPb7IvuCwTESrHpC/jRJjNrsCIBbRF3sRyN+jq2xKeitXUyQgrA8jRpPHAXst2d4fo4GK6oHIi+r0k1neuTmg54ex+HNfP1cRL4B0/sxLpXYeM8ha08QQBiVbYxjCgs3MRbFPS2ZXTc7yx8JX9PsmlpudbkBqTRUm1i2tZK+PGqWiHSvnqcj7A4wdw1jlY6RANQuXXACzvsQSitiF3B7RrRFaaY1Rauqqap8Sr3kCs3AGNf6a/t/jQc3CVeySFw9kHOLCFLi9y/AWRw+HI3ZyGfuaBrjx59tDHZMTFo6pSIGaNP4ZgJBGbn65mILzQyUo9AUk6Tbd84FDEocMosBqRfK3GtxE=)
2025-06-02 19:35:07.701726 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIER7Pl1SJ3yMxdl/V2I/B6huxKr71gPMTF3LPeuHiLKP)
2025-06-02 19:35:07.701825 | orchestrator |
2025-06-02 19:35:07.701959 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-06-02 19:35:07.701976 | orchestrator | Monday 02 June 2025 19:35:07 +0000 (0:00:00.940) 0:00:07.935 ***********
2025-06-02 19:35:08.692026 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCMr6Bsg1aW0Uwgv4YAd7lSB0YTj8VnipfMv4nFb07rXAXXmkBI5G+iEsPnAs0fNuQ1DG5BA1akTAcYyaq1KSFQr6mDVhWbVH6f2TDuQ3YJkezrYUH4yK/5QNxwU9Y4Jk2r+MtXTZ72lQjARAsxbQaeVNNv/WoYyJaZ3Nq6yHVaHmlNnaB5RrAkkJzR/e1C4yaCK1mOF/z08bQPbBGkQXLNxoyL4pI64+1mmRsIoHmfU89tdAhx2lyJZ0Pl+YYAmgFxleb55Ar0H0dxthQN1N6obc4H5OQg2tf3pSEeWymx9KimcvpDYjyJnTFNycD/m5KbGrFsExC0oinmDGT70pVXal2rjyJ5uc7SVjCwYMi80Hd4RS58k++yUPnTDME9y+dFrYxA7XyGDbXjfYSbtvMoXvQk/9FUAs4k3utuvqyDGaiXwMrrq8/evh08blNTqFFfain4aZ8WFD1oWkSYIcFNjzJ3qPIAeJxq5vHrC7ko7bRLhNC4Jy5YeCq1ufOy+d8=)
2025-06-02 19:35:08.692250 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKXaMbuQlx8Ul4IXg9DKyzY4zjS0Usz1gp9YCm4iY66P)
2025-06-02 19:35:08.692275 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBAxeGlv86uocikYcVtLSacgN+/WRkPDi7v+V0YByL7OBX5chddeA5KhXUueaxPxQ+om+cfZZFoWs6sM8HpVOcSs=)
2025-06-02 19:35:08.692961 | orchestrator |
2025-06-02 19:35:08.694161 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-06-02 19:35:08.694474 | orchestrator | Monday 02 June 2025 19:35:08 +0000 (0:00:00.992) 0:00:08.928 ***********
2025-06-02 19:35:09.624057 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKhaSEXSocfKFxBKEqJQpwX/zgZUxRRT0p5/0dqDrKpi)
2025-06-02 19:35:09.624226 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCe13EMSGWt4x2Pfp3DNtA+FIyRAKSfxHPz99SKhlDaz9jXHuDL4rTEc6nIK8D9IdnrGCW/fcm813M1QXAS2SyjZh7CEnFuBNh1+H1e4Cbx+WfCCRYkmTyXM8MLjfcMUD0AvXpjbbcbUFL1b8JxH4fT70qgv4e/BypkpJAdD157Gf8PGrM5gpNAjnz1y1NFQ0sMi74CyHef+tKoBVsf94NQB3tSRyUA0mO/xISqFpUk3QnJlWnM3fff7q24Q5YlXy/I63MwPKVxMD78CEXZD/fhw/b95eJYTWvQ0y0HJVDYund8nb2d6cUyR3f+hdK+KBAG30Y6Dk88o/CxLcyqM+p0T88vIM42XltF7mBaeCloxMxFqaZigASsPoDZfleZwNncNy0FTQnYPveuNFgNwJHdzZILZGwYK6mPnuT3Qz9oAaBVn3/rXX6J4FWXYubsbmJy7nY9uZ4cUhuRU9SA7ZUA4c6Lc2fr7uerC5OOa0mM4Ryn7X2npqoajrG/In+FSxE=)
2025-06-02 19:35:09.624351 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBOQaLxs72+xZ71wD5JnrN1hWyAWTJcD9xtHjcx9uZ+BU9vXulBLqPRQucbGmBgqFa4XOP5tScPM5qyCP/xeTCyc=)
2025-06-02 19:35:09.625201 | orchestrator |
2025-06-02 19:35:09.625420 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-06-02 19:35:09.626632 | orchestrator | Monday 02 June 2025 19:35:09 +0000 (0:00:00.931) 0:00:09.860 ***********
2025-06-02 19:35:10.669997 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDFQrsTnxtHfCaI3FBps8JGrG3pZF0onIrWvZn9AYmN60cGijb2qqdVKvslnop22PViJlLiSHF0RvfMWhyH/RnYl29sngoRL3p0RuJmdhNnBQfIML2sSTJc/i3FDhTcfuFOmXLmUBdQmRhJP8qSmU67eia0tL5mUczEGWfOs0V1ue2rLTYZ/JRaAO9RgM+VuTzdIAuy2RG4RFGXErm4HsyjyrwpwQ/PkQaOC4bQw5XkppZ89mlKYnmf7FHp9ajmafOhkzRUETscXWwH+EE3OgNu/vJQ97lEt734f4JVZTzn22K8CFo9wCFg6CA0Eb28YhnSwbPUUinsxci4eCQR2k5IxmywPm/agza0SaDoFsJ4Q8wzV8cAWOZuNG3oFKR1BNpMJ5YE5qXbDFT4Se2+6SvXFJ98W6imoYxIXAyEVIqf1xAC3XNq9dJlneP1bU/LZF5UaHpxxRWkxLqKTF0vs/LpOHB0VjoPg++lPqgG9Il132SDNYRTasI53Vi1aCAQSME=)
2025-06-02 19:35:10.670969 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFwwMXm8htCLX654UTjyJkuhzcLxy2NZRnE/1Nd/Qi93)
2025-06-02 19:35:10.671781 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBI5bg1sgFVENQNPERWupn3DlJZ0kuBA1jL20F8nfi26mcXZQlfJ7xh0HsyeAjdC4BCTFeal1a2SlUKTrZ2TfFgU=)
2025-06-02 19:35:10.671811 | orchestrator |
2025-06-02 19:35:10.673415 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-06-02 19:35:10.673440 | orchestrator | Monday 02 June 2025 19:35:10 +0000 (0:00:01.043) 0:00:10.903 ***********
2025-06-02 19:35:11.696540 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC1KJyzuoUlwh1uIDiHze9VOEzDjqAw83DdAP4XMNCYji6VaCT2Bds2wMGtZMoflZ0xXfF2e4c6QApKT9AdKGZooqhTxIesIThkcMpVPQzSPYZ4Zis30qLQD1JnaTTONyE8Go/WgFbN8pOiJm+HN889bg3LF2MMUrJ6sICqruj1vusq1Tct4w8Zi8LmRW2C3OGfh4rWNtj2RuNKPayrQIDFTWGuzrcIhy6TIqxJN54ofFItxgm+E67r/YmYNi6cIc40nlq3Plz9uJvuyNY88YkEE/QD2DUjp/BDprvc5aEqwtueF6Sxd8PYXtr8a6B2oOZ048sdqA19PnM03BhCzNj7oB+qsQnYadfoHmP0LW+FL3tzg++KMP0eMVi6yjQKfFTaf0/pf5tEHlcfuR58/Io5/2AooTZGdtM7aI9XO8U9BQkHvk98dklyEtNoexF5Y4c9WDPi2AaM189g2DHOLpbK2liExLH0gDKHL88loJVr/dKaAhvbt8IBXqo7XOJHS4M=)
2025-06-02 19:35:11.697071 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIG5iHKbszpk+zlkHSozwxhoB1MXFq+nsP0iO+GsQuwsB)
2025-06-02 19:35:11.697898 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBKbMcCuBL4vn24D6h58dRcrailLpMpCeV1XYN0ky9C07JPAmNJiMVrfau+DB6zXIcDuWPx8TvhmEeX0GelOYs08=)
2025-06-02 19:35:11.698606 | orchestrator |
2025-06-02 19:35:11.699361 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-06-02 19:35:11.700082 | orchestrator | Monday 02 June 2025 19:35:11 +0000 (0:00:01.027)
0:00:11.931 *********** 2025-06-02 19:35:12.784243 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDcPkA7qc/8EDmRtj6Ks71XJ2mdYdl0UG549q82e5lV60mbz0GI8eoaQ4TRSAW707li9HxouiWjLMTruEIHpLGIIsWVsxy6LNGs5D0l/m+h0VntlgjlAL7rAZKPwi54/9XqLro70LJFn2SnxCe6DZMrdsm6XPRXpcl9kDNRY3rLvh02W6CTIHOkKn7DIIf5ZYYWIXFqFosOa6ccPgrtYriwg+Tpaky/z0PNgcRvAHX4rBxgaAnamd3sgMHy4M4iO+aQu2XlR1+RUM6rmggVCO7uQelFS3XEPEbChWUuv+7lWIIO3T0HdoFwRfYfoD/lT/xF3sRCQXb75lbN+DfbvI+82gRYZPcie1NKLSPexj520Qe1DWD3FOLADWYS+Q18CMLrybYU3Sjp6/IPzpwIEnBuROEtwr8FYxB6Vc643PuLxvbqqQLzG83KOeSHyoCL7s1F8lxsQINCW0VfYn9zUON8tvgkV6K6lnnK8FV/luIykYxRX6dxIECGy1QHVb/rjCk=) 2025-06-02 19:35:12.785069 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBDgojQBoN/X5QoWlNYHeBh1XT2BglYoVnjHJaqbN6Kaepv7C+GtuCixl3U6vuZN2CA67dgd/RHk90yxDDCLW9ic=) 2025-06-02 19:35:12.786188 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAIyH6EDpDD984y1dgjdeABnu8FD2CoElcf0+XnvNHbd) 2025-06-02 19:35:12.786545 | orchestrator | 2025-06-02 19:35:12.787770 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host] *** 2025-06-02 19:35:12.789162 | orchestrator | Monday 02 June 2025 19:35:12 +0000 (0:00:01.087) 0:00:13.019 *********** 2025-06-02 19:35:18.048037 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2025-06-02 19:35:18.048147 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3) 2025-06-02 19:35:18.049322 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2025-06-02 19:35:18.049923 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2025-06-02 19:35:18.050934 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2025-06-02 19:35:18.051461 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2025-06-02 19:35:18.052719 | orchestrator | ok: 
[testbed-manager] => (item=testbed-node-2) 2025-06-02 19:35:18.052957 | orchestrator | 2025-06-02 19:35:18.054107 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host] *** 2025-06-02 19:35:18.055913 | orchestrator | Monday 02 June 2025 19:35:18 +0000 (0:00:05.262) 0:00:18.281 *********** 2025-06-02 19:35:18.212159 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager) 2025-06-02 19:35:18.213057 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3) 2025-06-02 19:35:18.214072 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4) 2025-06-02 19:35:18.215732 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2025-06-02 19:35:18.215834 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0) 2025-06-02 19:35:18.215920 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2025-06-02 19:35:18.216330 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2025-06-02 19:35:18.217218 | orchestrator | 2025-06-02 19:35:18.217612 
| orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-06-02 19:35:18.218643 | orchestrator | Monday 02 June 2025 19:35:18 +0000 (0:00:00.166) 0:00:18.448 *********** 2025-06-02 19:35:19.298640 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCx67AUFw/g5C5lnIbFaaN5rbstpC47Lg/nxNGmuE10OCTAwadQhPEQTh66iUJ4JR6TFQV939VSSheGG8CLnUFWWjIwGyi/4L7eqJIvxV8DJCdK3uEY0X1jpe1EjuGyE307OaB3vzq/oKg86DKm5Ce1oaBOpHKaZ+3rkyY56I67HEaT86+5WGal/02AoGDLVslAprJ4+Fo6hSbHXbwmrNbgu2n40CGxdXPIHTiDkwqYovlLrXnEtmSXicOTthSEfT9fy2LeLCFTLS3Ka4S3U4Rdd7qwAiVUFaBAaEkdpPt4S6OZxCTOFKESyADZcllJEHo/tNDuz5fFnZO+rfGOPSuLbnVmkVDlOZetm9+IaSRZZBhTjKklt9JX049wpejQCg6A5Hbk7ZTirAfz/+TFWm8hLYuKy89qGqWOb4gRCQma5CPRHJF9TryZuvr5K1V8ME0xIhD/B2q26IS2zblr2YlGwZOfIRkUmQWl6NVVgKInc7CPOQE/TrV6Woz4g7sJZkc=) 2025-06-02 19:35:19.298912 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBAvQKe0TXFsJEAW7TQgdsIpqqQ7oLGhEZAtJ6mRRE3fWCfe4mtydqP/hQOFYIp7HyGh67YLub/XTdJMtuDv2wWI=) 2025-06-02 19:35:19.300281 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIG+pov6gC9fwlur0So0w9scAaCytqboiltSLIXqAASVm) 2025-06-02 19:35:19.301126 | orchestrator | 2025-06-02 19:35:19.301482 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-06-02 19:35:19.302180 | orchestrator | Monday 02 June 2025 19:35:19 +0000 (0:00:01.084) 0:00:19.532 *********** 2025-06-02 19:35:20.351094 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQCiz1QCzRs91nSw1leWxEnWyCRjn9crhBtXyvjaJNChFjdMNTmtftmBl5FkF0rcDsDnrSoxKf7hLX0RM7Gdr9/2YC2DS3AWTtDVxRB70KOy97zh+oFUKlFxFgLmj6hWrz/qvP5Nr1oV0PIDPx5wSiO+E78lwCTYJvRhEoXPb7IvuCwTESrHpC/jRJjNrsCIBbRF3sRyN+jq2xKeitXUyQgrA8jRpPHAXst2d4fo4GK6oHIi+r0k1neuTmg54ex+HNfP1cRL4B0/sxLpXYeM8ha08QQBiVbYxjCgs3MRbFPS2ZXTc7yx8JX9PsmlpudbkBqTRUm1i2tZK+PGqWiHSvnqcj7A4wdw1jlY6RANQuXXACzvsQSitiF3B7RrRFaaY1Rauqqap8Sr3kCs3AGNf6a/t/jQc3CVeySFw9kHOLCFLi9y/AWRw+HI3ZyGfuaBrjx59tDHZMTFo6pSIGaNP4ZgJBGbn65mILzQyUo9AUk6Tbd84FDEocMosBqRfK3GtxE=) 2025-06-02 19:35:20.351845 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPrYz82Q1zd833od2EsJztcnvrcgcI5VR4I5sQLBx1dGwv5fSHl5Ji9FFR3ee+RDBlUAaumySum75AQnlJMwCSA=) 2025-06-02 19:35:20.352770 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIER7Pl1SJ3yMxdl/V2I/B6huxKr71gPMTF3LPeuHiLKP) 2025-06-02 19:35:20.353401 | orchestrator | 2025-06-02 19:35:20.354297 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-06-02 19:35:20.355347 | orchestrator | Monday 02 June 2025 19:35:20 +0000 (0:00:01.052) 0:00:20.585 *********** 2025-06-02 19:35:21.395487 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKXaMbuQlx8Ul4IXg9DKyzY4zjS0Usz1gp9YCm4iY66P) 2025-06-02 19:35:21.395751 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQCMr6Bsg1aW0Uwgv4YAd7lSB0YTj8VnipfMv4nFb07rXAXXmkBI5G+iEsPnAs0fNuQ1DG5BA1akTAcYyaq1KSFQr6mDVhWbVH6f2TDuQ3YJkezrYUH4yK/5QNxwU9Y4Jk2r+MtXTZ72lQjARAsxbQaeVNNv/WoYyJaZ3Nq6yHVaHmlNnaB5RrAkkJzR/e1C4yaCK1mOF/z08bQPbBGkQXLNxoyL4pI64+1mmRsIoHmfU89tdAhx2lyJZ0Pl+YYAmgFxleb55Ar0H0dxthQN1N6obc4H5OQg2tf3pSEeWymx9KimcvpDYjyJnTFNycD/m5KbGrFsExC0oinmDGT70pVXal2rjyJ5uc7SVjCwYMi80Hd4RS58k++yUPnTDME9y+dFrYxA7XyGDbXjfYSbtvMoXvQk/9FUAs4k3utuvqyDGaiXwMrrq8/evh08blNTqFFfain4aZ8WFD1oWkSYIcFNjzJ3qPIAeJxq5vHrC7ko7bRLhNC4Jy5YeCq1ufOy+d8=) 2025-06-02 19:35:21.396485 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBAxeGlv86uocikYcVtLSacgN+/WRkPDi7v+V0YByL7OBX5chddeA5KhXUueaxPxQ+om+cfZZFoWs6sM8HpVOcSs=) 2025-06-02 19:35:21.397115 | orchestrator | 2025-06-02 19:35:21.398074 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-06-02 19:35:21.398714 | orchestrator | Monday 02 June 2025 19:35:21 +0000 (0:00:01.043) 0:00:21.628 *********** 2025-06-02 19:35:22.466960 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBOQaLxs72+xZ71wD5JnrN1hWyAWTJcD9xtHjcx9uZ+BU9vXulBLqPRQucbGmBgqFa4XOP5tScPM5qyCP/xeTCyc=) 2025-06-02 19:35:22.469043 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCe13EMSGWt4x2Pfp3DNtA+FIyRAKSfxHPz99SKhlDaz9jXHuDL4rTEc6nIK8D9IdnrGCW/fcm813M1QXAS2SyjZh7CEnFuBNh1+H1e4Cbx+WfCCRYkmTyXM8MLjfcMUD0AvXpjbbcbUFL1b8JxH4fT70qgv4e/BypkpJAdD157Gf8PGrM5gpNAjnz1y1NFQ0sMi74CyHef+tKoBVsf94NQB3tSRyUA0mO/xISqFpUk3QnJlWnM3fff7q24Q5YlXy/I63MwPKVxMD78CEXZD/fhw/b95eJYTWvQ0y0HJVDYund8nb2d6cUyR3f+hdK+KBAG30Y6Dk88o/CxLcyqM+p0T88vIM42XltF7mBaeCloxMxFqaZigASsPoDZfleZwNncNy0FTQnYPveuNFgNwJHdzZILZGwYK6mPnuT3Qz9oAaBVn3/rXX6J4FWXYubsbmJy7nY9uZ4cUhuRU9SA7ZUA4c6Lc2fr7uerC5OOa0mM4Ryn7X2npqoajrG/In+FSxE=) 
2025-06-02 19:35:22.469817 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKhaSEXSocfKFxBKEqJQpwX/zgZUxRRT0p5/0dqDrKpi) 2025-06-02 19:35:22.470148 | orchestrator | 2025-06-02 19:35:22.470963 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-06-02 19:35:22.471634 | orchestrator | Monday 02 June 2025 19:35:22 +0000 (0:00:01.072) 0:00:22.700 *********** 2025-06-02 19:35:23.526848 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBI5bg1sgFVENQNPERWupn3DlJZ0kuBA1jL20F8nfi26mcXZQlfJ7xh0HsyeAjdC4BCTFeal1a2SlUKTrZ2TfFgU=) 2025-06-02 19:35:23.527019 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFwwMXm8htCLX654UTjyJkuhzcLxy2NZRnE/1Nd/Qi93) 2025-06-02 19:35:23.528288 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDFQrsTnxtHfCaI3FBps8JGrG3pZF0onIrWvZn9AYmN60cGijb2qqdVKvslnop22PViJlLiSHF0RvfMWhyH/RnYl29sngoRL3p0RuJmdhNnBQfIML2sSTJc/i3FDhTcfuFOmXLmUBdQmRhJP8qSmU67eia0tL5mUczEGWfOs0V1ue2rLTYZ/JRaAO9RgM+VuTzdIAuy2RG4RFGXErm4HsyjyrwpwQ/PkQaOC4bQw5XkppZ89mlKYnmf7FHp9ajmafOhkzRUETscXWwH+EE3OgNu/vJQ97lEt734f4JVZTzn22K8CFo9wCFg6CA0Eb28YhnSwbPUUinsxci4eCQR2k5IxmywPm/agza0SaDoFsJ4Q8wzV8cAWOZuNG3oFKR1BNpMJ5YE5qXbDFT4Se2+6SvXFJ98W6imoYxIXAyEVIqf1xAC3XNq9dJlneP1bU/LZF5UaHpxxRWkxLqKTF0vs/LpOHB0VjoPg++lPqgG9Il132SDNYRTasI53Vi1aCAQSME=) 2025-06-02 19:35:23.529553 | orchestrator | 2025-06-02 19:35:23.529747 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-06-02 19:35:23.530299 | orchestrator | Monday 02 June 2025 19:35:23 +0000 (0:00:01.060) 0:00:23.761 *********** 2025-06-02 19:35:24.569013 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQC1KJyzuoUlwh1uIDiHze9VOEzDjqAw83DdAP4XMNCYji6VaCT2Bds2wMGtZMoflZ0xXfF2e4c6QApKT9AdKGZooqhTxIesIThkcMpVPQzSPYZ4Zis30qLQD1JnaTTONyE8Go/WgFbN8pOiJm+HN889bg3LF2MMUrJ6sICqruj1vusq1Tct4w8Zi8LmRW2C3OGfh4rWNtj2RuNKPayrQIDFTWGuzrcIhy6TIqxJN54ofFItxgm+E67r/YmYNi6cIc40nlq3Plz9uJvuyNY88YkEE/QD2DUjp/BDprvc5aEqwtueF6Sxd8PYXtr8a6B2oOZ048sdqA19PnM03BhCzNj7oB+qsQnYadfoHmP0LW+FL3tzg++KMP0eMVi6yjQKfFTaf0/pf5tEHlcfuR58/Io5/2AooTZGdtM7aI9XO8U9BQkHvk98dklyEtNoexF5Y4c9WDPi2AaM189g2DHOLpbK2liExLH0gDKHL88loJVr/dKaAhvbt8IBXqo7XOJHS4M=) 2025-06-02 19:35:24.569126 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBKbMcCuBL4vn24D6h58dRcrailLpMpCeV1XYN0ky9C07JPAmNJiMVrfau+DB6zXIcDuWPx8TvhmEeX0GelOYs08=) 2025-06-02 19:35:24.569220 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIG5iHKbszpk+zlkHSozwxhoB1MXFq+nsP0iO+GsQuwsB) 2025-06-02 19:35:24.569961 | orchestrator | 2025-06-02 19:35:24.570703 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-06-02 19:35:24.571276 | orchestrator | Monday 02 June 2025 19:35:24 +0000 (0:00:01.039) 0:00:24.801 *********** 2025-06-02 19:35:25.609838 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDcPkA7qc/8EDmRtj6Ks71XJ2mdYdl0UG549q82e5lV60mbz0GI8eoaQ4TRSAW707li9HxouiWjLMTruEIHpLGIIsWVsxy6LNGs5D0l/m+h0VntlgjlAL7rAZKPwi54/9XqLro70LJFn2SnxCe6DZMrdsm6XPRXpcl9kDNRY3rLvh02W6CTIHOkKn7DIIf5ZYYWIXFqFosOa6ccPgrtYriwg+Tpaky/z0PNgcRvAHX4rBxgaAnamd3sgMHy4M4iO+aQu2XlR1+RUM6rmggVCO7uQelFS3XEPEbChWUuv+7lWIIO3T0HdoFwRfYfoD/lT/xF3sRCQXb75lbN+DfbvI+82gRYZPcie1NKLSPexj520Qe1DWD3FOLADWYS+Q18CMLrybYU3Sjp6/IPzpwIEnBuROEtwr8FYxB6Vc643PuLxvbqqQLzG83KOeSHyoCL7s1F8lxsQINCW0VfYn9zUON8tvgkV6K6lnnK8FV/luIykYxRX6dxIECGy1QHVb/rjCk=) 2025-06-02 19:35:25.610291 | orchestrator | changed: [testbed-manager] => 
(item=192.168.16.12 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBDgojQBoN/X5QoWlNYHeBh1XT2BglYoVnjHJaqbN6Kaepv7C+GtuCixl3U6vuZN2CA67dgd/RHk90yxDDCLW9ic=) 2025-06-02 19:35:25.611205 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAIyH6EDpDD984y1dgjdeABnu8FD2CoElcf0+XnvNHbd) 2025-06-02 19:35:25.612060 | orchestrator | 2025-06-02 19:35:25.613116 | orchestrator | TASK [osism.commons.known_hosts : Write static known_hosts entries] ************ 2025-06-02 19:35:25.613849 | orchestrator | Monday 02 June 2025 19:35:25 +0000 (0:00:01.042) 0:00:25.844 *********** 2025-06-02 19:35:25.772129 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)  2025-06-02 19:35:25.772507 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)  2025-06-02 19:35:25.773457 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)  2025-06-02 19:35:25.775282 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)  2025-06-02 19:35:25.775661 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2025-06-02 19:35:25.776718 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)  2025-06-02 19:35:25.777469 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)  2025-06-02 19:35:25.778192 | orchestrator | skipping: [testbed-manager] 2025-06-02 19:35:25.778377 | orchestrator | 2025-06-02 19:35:25.779014 | orchestrator | TASK [osism.commons.known_hosts : Write extra known_hosts entries] ************* 2025-06-02 19:35:25.779467 | orchestrator | Monday 02 June 2025 19:35:25 +0000 (0:00:00.164) 0:00:26.008 *********** 2025-06-02 19:35:25.826191 | orchestrator | skipping: [testbed-manager] 2025-06-02 19:35:25.826277 | orchestrator | 2025-06-02 19:35:25.826744 | orchestrator | TASK [osism.commons.known_hosts : Delete known_hosts entries] ****************** 2025-06-02 19:35:25.827490 | orchestrator | Monday 02 June 2025 
19:35:25 +0000 (0:00:00.054) 0:00:26.062 *********** 2025-06-02 19:35:25.872277 | orchestrator | skipping: [testbed-manager] 2025-06-02 19:35:25.872568 | orchestrator | 2025-06-02 19:35:25.873363 | orchestrator | TASK [osism.commons.known_hosts : Set file permissions] ************************ 2025-06-02 19:35:25.874544 | orchestrator | Monday 02 June 2025 19:35:25 +0000 (0:00:00.046) 0:00:26.109 *********** 2025-06-02 19:35:26.388318 | orchestrator | changed: [testbed-manager] 2025-06-02 19:35:26.388484 | orchestrator | 2025-06-02 19:35:26.388838 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-02 19:35:26.389142 | orchestrator | 2025-06-02 19:35:26 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-06-02 19:35:26.389167 | orchestrator | 2025-06-02 19:35:26 | INFO  | Please wait and do not abort execution. 2025-06-02 19:35:26.389540 | orchestrator | testbed-manager : ok=31  changed=15  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-06-02 19:35:26.390156 | orchestrator | 2025-06-02 19:35:26.390477 | orchestrator | 2025-06-02 19:35:26.391129 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-02 19:35:26.392015 | orchestrator | Monday 02 June 2025 19:35:26 +0000 (0:00:00.514) 0:00:26.624 *********** 2025-06-02 19:35:26.392088 | orchestrator | =============================================================================== 2025-06-02 19:35:26.392638 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname --- 5.64s 2025-06-02 19:35:26.392996 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host --- 5.26s 2025-06-02 19:35:26.393363 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.09s 2025-06-02 19:35:26.393878 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries 
----------- 1.08s 2025-06-02 19:35:26.394267 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.07s 2025-06-02 19:35:26.394838 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.06s 2025-06-02 19:35:26.395374 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.06s 2025-06-02 19:35:26.395788 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.05s 2025-06-02 19:35:26.396112 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.04s 2025-06-02 19:35:26.396579 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.04s 2025-06-02 19:35:26.397233 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.04s 2025-06-02 19:35:26.397551 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.04s 2025-06-02 19:35:26.397797 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.03s 2025-06-02 19:35:26.398408 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 0.99s 2025-06-02 19:35:26.398506 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 0.94s 2025-06-02 19:35:26.398857 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 0.93s 2025-06-02 19:35:26.399326 | orchestrator | osism.commons.known_hosts : Set file permissions ------------------------ 0.52s 2025-06-02 19:35:26.399782 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host --- 0.17s 2025-06-02 19:35:26.400058 | orchestrator | osism.commons.known_hosts : Write static known_hosts entries ------------ 0.16s 2025-06-02 19:35:26.400507 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts 
entries for all hosts with hostname --- 0.15s 2025-06-02 19:35:26.880196 | orchestrator | + osism apply squid 2025-06-02 19:35:28.529940 | orchestrator | Registering Redlock._acquired_script 2025-06-02 19:35:28.530070 | orchestrator | Registering Redlock._extend_script 2025-06-02 19:35:28.530084 | orchestrator | Registering Redlock._release_script 2025-06-02 19:35:28.587547 | orchestrator | 2025-06-02 19:35:28 | INFO  | Task e6d14b7f-0728-4d4d-b66b-f50af0e588ee (squid) was prepared for execution. 2025-06-02 19:35:28.587635 | orchestrator | 2025-06-02 19:35:28 | INFO  | It takes a moment until task e6d14b7f-0728-4d4d-b66b-f50af0e588ee (squid) has been started and output is visible here. 2025-06-02 19:35:32.569156 | orchestrator | 2025-06-02 19:35:32.570299 | orchestrator | PLAY [Apply role squid] ******************************************************** 2025-06-02 19:35:32.572120 | orchestrator | 2025-06-02 19:35:32.573118 | orchestrator | TASK [osism.services.squid : Include install tasks] **************************** 2025-06-02 19:35:32.573807 | orchestrator | Monday 02 June 2025 19:35:32 +0000 (0:00:00.167) 0:00:00.167 *********** 2025-06-02 19:35:32.649998 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/squid/tasks/install-Debian-family.yml for testbed-manager 2025-06-02 19:35:32.650256 | orchestrator | 2025-06-02 19:35:32.651535 | orchestrator | TASK [osism.services.squid : Install required packages] ************************ 2025-06-02 19:35:32.652734 | orchestrator | Monday 02 June 2025 19:35:32 +0000 (0:00:00.080) 0:00:00.248 *********** 2025-06-02 19:35:34.097562 | orchestrator | ok: [testbed-manager] 2025-06-02 19:35:34.097790 | orchestrator | 2025-06-02 19:35:34.098279 | orchestrator | TASK [osism.services.squid : Create required directories] ********************** 2025-06-02 19:35:34.100031 | orchestrator | Monday 02 June 2025 19:35:34 +0000 (0:00:01.446) 0:00:01.694 *********** 2025-06-02 19:35:35.258869 
| orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration) 2025-06-02 19:35:35.258974 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration/conf.d) 2025-06-02 19:35:35.258991 | orchestrator | ok: [testbed-manager] => (item=/opt/squid) 2025-06-02 19:35:35.259004 | orchestrator | 2025-06-02 19:35:35.259083 | orchestrator | TASK [osism.services.squid : Copy squid configuration files] ******************* 2025-06-02 19:35:35.259172 | orchestrator | Monday 02 June 2025 19:35:35 +0000 (0:00:01.159) 0:00:02.854 *********** 2025-06-02 19:35:36.160050 | orchestrator | changed: [testbed-manager] => (item=osism.conf) 2025-06-02 19:35:36.160153 | orchestrator | 2025-06-02 19:35:36.160168 | orchestrator | TASK [osism.services.squid : Remove osism_allow_list.conf configuration file] *** 2025-06-02 19:35:36.160620 | orchestrator | Monday 02 June 2025 19:35:36 +0000 (0:00:00.903) 0:00:03.757 *********** 2025-06-02 19:35:36.467363 | orchestrator | ok: [testbed-manager] 2025-06-02 19:35:36.467967 | orchestrator | 2025-06-02 19:35:36.468390 | orchestrator | TASK [osism.services.squid : Copy docker-compose.yml file] ********************* 2025-06-02 19:35:36.468974 | orchestrator | Monday 02 June 2025 19:35:36 +0000 (0:00:00.310) 0:00:04.068 *********** 2025-06-02 19:35:37.293693 | orchestrator | changed: [testbed-manager] 2025-06-02 19:35:37.296021 | orchestrator | 2025-06-02 19:35:37.296826 | orchestrator | TASK [osism.services.squid : Manage squid service] ***************************** 2025-06-02 19:35:37.297601 | orchestrator | Monday 02 June 2025 19:35:37 +0000 (0:00:00.825) 0:00:04.893 *********** 2025-06-02 19:36:09.023136 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage squid service (10 retries left). 
2025-06-02 19:36:09.023255 | orchestrator | ok: [testbed-manager] 2025-06-02 19:36:09.023272 | orchestrator | 2025-06-02 19:36:09.023471 | orchestrator | RUNNING HANDLER [osism.services.squid : Restart squid service] ***************** 2025-06-02 19:36:09.024440 | orchestrator | Monday 02 June 2025 19:36:09 +0000 (0:00:31.721) 0:00:36.615 *********** 2025-06-02 19:36:21.489662 | orchestrator | changed: [testbed-manager] 2025-06-02 19:36:21.489831 | orchestrator | 2025-06-02 19:36:21.489849 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for squid service to start] ******* 2025-06-02 19:36:21.489863 | orchestrator | Monday 02 June 2025 19:36:21 +0000 (0:00:12.470) 0:00:49.086 *********** 2025-06-02 19:37:21.566106 | orchestrator | Pausing for 60 seconds 2025-06-02 19:37:21.566230 | orchestrator | changed: [testbed-manager] 2025-06-02 19:37:21.566248 | orchestrator | 2025-06-02 19:37:21.566261 | orchestrator | RUNNING HANDLER [osism.services.squid : Register that squid service was restarted] *** 2025-06-02 19:37:21.566274 | orchestrator | Monday 02 June 2025 19:37:21 +0000 (0:01:00.074) 0:01:49.160 *********** 2025-06-02 19:37:21.632606 | orchestrator | ok: [testbed-manager] 2025-06-02 19:37:21.633176 | orchestrator | 2025-06-02 19:37:21.633838 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for an healthy squid service] ***** 2025-06-02 19:37:21.634547 | orchestrator | Monday 02 June 2025 19:37:21 +0000 (0:00:00.071) 0:01:49.232 *********** 2025-06-02 19:37:22.210820 | orchestrator | changed: [testbed-manager] 2025-06-02 19:37:22.211821 | orchestrator | 2025-06-02 19:37:22.212809 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-02 19:37:22.212851 | orchestrator | 2025-06-02 19:37:22 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 
2025-06-02 19:37:22.212865 | orchestrator | 2025-06-02 19:37:22 | INFO  | Please wait and do not abort execution. 2025-06-02 19:37:22.213449 | orchestrator | testbed-manager : ok=11  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-02 19:37:22.214360 | orchestrator | 2025-06-02 19:37:22.214733 | orchestrator | 2025-06-02 19:37:22.215273 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-02 19:37:22.215716 | orchestrator | Monday 02 June 2025 19:37:22 +0000 (0:00:00.576) 0:01:49.809 *********** 2025-06-02 19:37:22.216135 | orchestrator | =============================================================================== 2025-06-02 19:37:22.216451 | orchestrator | osism.services.squid : Wait for squid service to start ----------------- 60.07s 2025-06-02 19:37:22.217082 | orchestrator | osism.services.squid : Manage squid service ---------------------------- 31.72s 2025-06-02 19:37:22.217329 | orchestrator | osism.services.squid : Restart squid service --------------------------- 12.47s 2025-06-02 19:37:22.218619 | orchestrator | osism.services.squid : Install required packages ------------------------ 1.45s 2025-06-02 19:37:22.218914 | orchestrator | osism.services.squid : Create required directories ---------------------- 1.16s 2025-06-02 19:37:22.219227 | orchestrator | osism.services.squid : Copy squid configuration files ------------------- 0.90s 2025-06-02 19:37:22.219933 | orchestrator | osism.services.squid : Copy docker-compose.yml file --------------------- 0.83s 2025-06-02 19:37:22.220308 | orchestrator | osism.services.squid : Wait for an healthy squid service ---------------- 0.58s 2025-06-02 19:37:22.220747 | orchestrator | osism.services.squid : Remove osism_allow_list.conf configuration file --- 0.31s 2025-06-02 19:37:22.221097 | orchestrator | osism.services.squid : Include install tasks ---------------------------- 0.08s 2025-06-02 19:37:22.221479 | orchestrator | 
osism.services.squid : Register that squid service was restarted -------- 0.07s 2025-06-02 19:37:22.691862 | orchestrator | + [[ 9.1.0 != \l\a\t\e\s\t ]] 2025-06-02 19:37:22.691956 | orchestrator | + sed -i 's#docker_namespace: kolla#docker_namespace: kolla/release#' /opt/configuration/inventory/group_vars/all/kolla.yml 2025-06-02 19:37:22.697837 | orchestrator | ++ semver 9.1.0 9.0.0 2025-06-02 19:37:22.765644 | orchestrator | + [[ 1 -lt 0 ]] 2025-06-02 19:37:22.766666 | orchestrator | + osism apply operator -u ubuntu -l testbed-nodes 2025-06-02 19:37:24.375278 | orchestrator | Registering Redlock._acquired_script 2025-06-02 19:37:24.375400 | orchestrator | Registering Redlock._extend_script 2025-06-02 19:37:24.375415 | orchestrator | Registering Redlock._release_script 2025-06-02 19:37:24.431251 | orchestrator | 2025-06-02 19:37:24 | INFO  | Task 89b112d8-117c-4732-a574-5ad8a5390b05 (operator) was prepared for execution. 2025-06-02 19:37:24.431347 | orchestrator | 2025-06-02 19:37:24 | INFO  | It takes a moment until task 89b112d8-117c-4732-a574-5ad8a5390b05 (operator) has been started and output is visible here. 
2025-06-02 19:37:28.360164 | orchestrator |
2025-06-02 19:37:28.362839 | orchestrator | PLAY [Make ssh pipelining working] *********************************************
2025-06-02 19:37:28.362915 | orchestrator |
2025-06-02 19:37:28.364064 | orchestrator | TASK [Gathering Facts] *********************************************************
2025-06-02 19:37:28.364898 | orchestrator | Monday 02 June 2025 19:37:28 +0000 (0:00:00.146) 0:00:00.146 ***********
2025-06-02 19:37:31.507384 | orchestrator | ok: [testbed-node-0]
2025-06-02 19:37:31.507551 | orchestrator | ok: [testbed-node-2]
2025-06-02 19:37:31.508252 | orchestrator | ok: [testbed-node-1]
2025-06-02 19:37:31.509237 | orchestrator | ok: [testbed-node-4]
2025-06-02 19:37:31.509516 | orchestrator | ok: [testbed-node-3]
2025-06-02 19:37:31.510359 | orchestrator | ok: [testbed-node-5]
2025-06-02 19:37:31.510965 | orchestrator |
2025-06-02 19:37:31.512460 | orchestrator | TASK [Do not require tty for all users] ****************************************
2025-06-02 19:37:31.512926 | orchestrator | Monday 02 June 2025 19:37:31 +0000 (0:00:03.150) 0:00:03.297 ***********
2025-06-02 19:37:32.248088 | orchestrator | ok: [testbed-node-3]
2025-06-02 19:37:32.251109 | orchestrator | ok: [testbed-node-1]
2025-06-02 19:37:32.251140 | orchestrator | ok: [testbed-node-4]
2025-06-02 19:37:32.251152 | orchestrator | ok: [testbed-node-2]
2025-06-02 19:37:32.251164 | orchestrator | ok: [testbed-node-0]
2025-06-02 19:37:32.253584 | orchestrator | ok: [testbed-node-5]
2025-06-02 19:37:32.253606 | orchestrator |
2025-06-02 19:37:32.253620 | orchestrator | PLAY [Apply role operator] *****************************************************
2025-06-02 19:37:32.253633 | orchestrator |
2025-06-02 19:37:32.254080 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] *****
2025-06-02 19:37:32.254627 | orchestrator | Monday 02 June 2025 19:37:32 +0000 (0:00:00.740) 0:00:04.037 ***********
2025-06-02 19:37:32.316231 | orchestrator | ok: [testbed-node-0]
2025-06-02 19:37:32.336755 | orchestrator | ok: [testbed-node-1]
2025-06-02 19:37:32.362592 | orchestrator | ok: [testbed-node-2]
2025-06-02 19:37:32.413273 | orchestrator | ok: [testbed-node-3]
2025-06-02 19:37:32.415233 | orchestrator | ok: [testbed-node-4]
2025-06-02 19:37:32.416623 | orchestrator | ok: [testbed-node-5]
2025-06-02 19:37:32.417385 | orchestrator |
2025-06-02 19:37:32.417707 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] ***
2025-06-02 19:37:32.418428 | orchestrator | Monday 02 June 2025 19:37:32 +0000 (0:00:00.164) 0:00:04.202 ***********
2025-06-02 19:37:32.487583 | orchestrator | ok: [testbed-node-0]
2025-06-02 19:37:32.506903 | orchestrator | ok: [testbed-node-1]
2025-06-02 19:37:32.582779 | orchestrator | ok: [testbed-node-2]
2025-06-02 19:37:32.584087 | orchestrator | ok: [testbed-node-3]
2025-06-02 19:37:32.584117 | orchestrator | ok: [testbed-node-4]
2025-06-02 19:37:32.584130 | orchestrator | ok: [testbed-node-5]
2025-06-02 19:37:32.584350 | orchestrator |
2025-06-02 19:37:32.586105 | orchestrator | TASK [osism.commons.operator : Create operator group] **************************
2025-06-02 19:37:32.586149 | orchestrator | Monday 02 June 2025 19:37:32 +0000 (0:00:00.169) 0:00:04.372 ***********
2025-06-02 19:37:33.186724 | orchestrator | changed: [testbed-node-2]
2025-06-02 19:37:33.187126 | orchestrator | changed: [testbed-node-5]
2025-06-02 19:37:33.188266 | orchestrator | changed: [testbed-node-4]
2025-06-02 19:37:33.190355 | orchestrator | changed: [testbed-node-1]
2025-06-02 19:37:33.190895 | orchestrator | changed: [testbed-node-0]
2025-06-02 19:37:33.192953 | orchestrator | changed: [testbed-node-3]
2025-06-02 19:37:33.194881 | orchestrator |
2025-06-02 19:37:33.196452 | orchestrator | TASK [osism.commons.operator : Create user] ************************************
2025-06-02 19:37:33.197094 | orchestrator | Monday 02 June 2025 19:37:33 +0000 (0:00:00.604) 0:00:04.976 ***********
2025-06-02 19:37:34.046780 | orchestrator | changed: [testbed-node-0]
2025-06-02 19:37:34.046940 | orchestrator | changed: [testbed-node-3]
2025-06-02 19:37:34.047746 | orchestrator | changed: [testbed-node-4]
2025-06-02 19:37:34.048780 | orchestrator | changed: [testbed-node-1]
2025-06-02 19:37:34.049535 | orchestrator | changed: [testbed-node-2]
2025-06-02 19:37:34.050593 | orchestrator | changed: [testbed-node-5]
2025-06-02 19:37:34.051325 | orchestrator |
2025-06-02 19:37:34.052791 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ******************
2025-06-02 19:37:34.053581 | orchestrator | Monday 02 June 2025 19:37:34 +0000 (0:00:00.856) 0:00:05.833 ***********
2025-06-02 19:37:35.253084 | orchestrator | changed: [testbed-node-0] => (item=adm)
2025-06-02 19:37:35.254464 | orchestrator | changed: [testbed-node-2] => (item=adm)
2025-06-02 19:37:35.254515 | orchestrator | changed: [testbed-node-1] => (item=adm)
2025-06-02 19:37:35.254575 | orchestrator | changed: [testbed-node-3] => (item=adm)
2025-06-02 19:37:35.256212 | orchestrator | changed: [testbed-node-4] => (item=adm)
2025-06-02 19:37:35.257302 | orchestrator | changed: [testbed-node-5] => (item=adm)
2025-06-02 19:37:35.258322 | orchestrator | changed: [testbed-node-0] => (item=sudo)
2025-06-02 19:37:35.259008 | orchestrator | changed: [testbed-node-1] => (item=sudo)
2025-06-02 19:37:35.261249 | orchestrator | changed: [testbed-node-2] => (item=sudo)
2025-06-02 19:37:35.262153 | orchestrator | changed: [testbed-node-3] => (item=sudo)
2025-06-02 19:37:35.262999 | orchestrator | changed: [testbed-node-4] => (item=sudo)
2025-06-02 19:37:35.264189 | orchestrator | changed: [testbed-node-5] => (item=sudo)
2025-06-02 19:37:35.265146 | orchestrator |
2025-06-02 19:37:35.266197 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] *************************
2025-06-02 19:37:35.267106 | orchestrator | Monday 02 June 2025 19:37:35 +0000 (0:00:01.207) 0:00:07.040 ***********
2025-06-02 19:37:36.479282 | orchestrator | changed: [testbed-node-1]
2025-06-02 19:37:36.479387 | orchestrator | changed: [testbed-node-4]
2025-06-02 19:37:36.479403 | orchestrator | changed: [testbed-node-5]
2025-06-02 19:37:36.480718 | orchestrator | changed: [testbed-node-0]
2025-06-02 19:37:36.481747 | orchestrator | changed: [testbed-node-2]
2025-06-02 19:37:36.482654 | orchestrator | changed: [testbed-node-3]
2025-06-02 19:37:36.483609 | orchestrator |
2025-06-02 19:37:36.484249 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] ***
2025-06-02 19:37:36.485531 | orchestrator | Monday 02 June 2025 19:37:36 +0000 (0:00:01.224) 0:00:08.265 ***********
2025-06-02 19:37:37.629893 | orchestrator | [WARNING]: Module remote_tmp /root/.ansible/tmp did not exist and was created
2025-06-02 19:37:37.630229 | orchestrator | with a mode of 0700, this may cause issues when running as another user. To
2025-06-02 19:37:37.631867 | orchestrator | avoid this, create the remote_tmp dir with the correct permissions manually
2025-06-02 19:37:37.718275 | orchestrator | changed: [testbed-node-2] => (item=export LANGUAGE=C.UTF-8)
2025-06-02 19:37:37.718993 | orchestrator | changed: [testbed-node-1] => (item=export LANGUAGE=C.UTF-8)
2025-06-02 19:37:37.719657 | orchestrator | changed: [testbed-node-0] => (item=export LANGUAGE=C.UTF-8)
2025-06-02 19:37:37.720723 | orchestrator | changed: [testbed-node-4] => (item=export LANGUAGE=C.UTF-8)
2025-06-02 19:37:37.721871 | orchestrator | changed: [testbed-node-5] => (item=export LANGUAGE=C.UTF-8)
2025-06-02 19:37:37.722568 | orchestrator | changed: [testbed-node-3] => (item=export LANGUAGE=C.UTF-8)
2025-06-02 19:37:37.723461 | orchestrator | changed: [testbed-node-2] => (item=export LANG=C.UTF-8)
2025-06-02 19:37:37.724319 | orchestrator | changed: [testbed-node-1] => (item=export LANG=C.UTF-8)
2025-06-02 19:37:37.725097 | orchestrator | changed: [testbed-node-4] => (item=export LANG=C.UTF-8)
2025-06-02 19:37:37.725868 | orchestrator | changed: [testbed-node-3] => (item=export LANG=C.UTF-8)
2025-06-02 19:37:37.726704 | orchestrator | changed: [testbed-node-0] => (item=export LANG=C.UTF-8)
2025-06-02 19:37:37.727251 | orchestrator | changed: [testbed-node-5] => (item=export LANG=C.UTF-8)
2025-06-02 19:37:37.728325 | orchestrator | changed: [testbed-node-4] => (item=export LC_ALL=C.UTF-8)
2025-06-02 19:37:37.729581 | orchestrator | changed: [testbed-node-3] => (item=export LC_ALL=C.UTF-8)
2025-06-02 19:37:37.730335 | orchestrator | changed: [testbed-node-5] => (item=export LC_ALL=C.UTF-8)
2025-06-02 19:37:37.730951 | orchestrator | changed: [testbed-node-0] => (item=export LC_ALL=C.UTF-8)
2025-06-02 19:37:37.731904 | orchestrator | changed: [testbed-node-2] => (item=export LC_ALL=C.UTF-8)
2025-06-02 19:37:37.732061 | orchestrator | changed: [testbed-node-1] => (item=export LC_ALL=C.UTF-8)
2025-06-02 19:37:37.732659 | orchestrator |
2025-06-02 19:37:37.733278 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] **************************
2025-06-02 19:37:37.733927 | orchestrator | Monday 02 June 2025 19:37:37 +0000 (0:00:01.242) 0:00:09.508 ***********
2025-06-02 19:37:38.278351 | orchestrator | changed: [testbed-node-4]
2025-06-02 19:37:38.278458 | orchestrator | changed: [testbed-node-2]
2025-06-02 19:37:38.278985 | orchestrator | changed: [testbed-node-0]
2025-06-02 19:37:38.279255 | orchestrator | changed: [testbed-node-3]
2025-06-02 19:37:38.279844 | orchestrator | changed: [testbed-node-1]
2025-06-02 19:37:38.280546 | orchestrator | changed: [testbed-node-5]
2025-06-02 19:37:38.281177 | orchestrator |
2025-06-02 19:37:38.281745 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************
2025-06-02 19:37:38.282521 | orchestrator | Monday 02 June 2025 19:37:38 +0000 (0:00:00.560) 0:00:10.068 ***********
2025-06-02 19:37:38.365115 | orchestrator | skipping: [testbed-node-0]
2025-06-02 19:37:38.391725 | orchestrator | skipping: [testbed-node-1]
2025-06-02 19:37:38.435010 | orchestrator | skipping: [testbed-node-2]
2025-06-02 19:37:38.435303 | orchestrator | skipping: [testbed-node-3]
2025-06-02 19:37:38.436058 | orchestrator | skipping: [testbed-node-4]
2025-06-02 19:37:38.437345 | orchestrator | skipping: [testbed-node-5]
2025-06-02 19:37:38.437924 | orchestrator |
2025-06-02 19:37:38.438616 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************
2025-06-02 19:37:38.439277 | orchestrator | Monday 02 June 2025 19:37:38 +0000 (0:00:00.157) 0:00:10.225 ***********
2025-06-02 19:37:39.135425 | orchestrator | changed: [testbed-node-4] => (item=None)
2025-06-02 19:37:39.135535 | orchestrator | changed: [testbed-node-4]
2025-06-02 19:37:39.136216 | orchestrator | changed: [testbed-node-0] => (item=None)
2025-06-02 19:37:39.138114 | orchestrator | changed: [testbed-node-2] => (item=None)
2025-06-02 19:37:39.138757 | orchestrator | changed: [testbed-node-5] => (item=None)
2025-06-02 19:37:39.139761 | orchestrator | changed: [testbed-node-0]
2025-06-02 19:37:39.140299 | orchestrator | changed: [testbed-node-2]
2025-06-02 19:37:39.141492 | orchestrator | changed: [testbed-node-1] => (item=None)
2025-06-02 19:37:39.141966 | orchestrator | changed: [testbed-node-5]
2025-06-02 19:37:39.142375 | orchestrator | changed: [testbed-node-1]
2025-06-02 19:37:39.142778 | orchestrator | changed: [testbed-node-3] => (item=None)
2025-06-02 19:37:39.143619 | orchestrator | changed: [testbed-node-3]
2025-06-02 19:37:39.144262 | orchestrator |
2025-06-02 19:37:39.144505 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] *********************
2025-06-02 19:37:39.145245 | orchestrator | Monday 02 June 2025 19:37:39 +0000 (0:00:00.697) 0:00:10.923 ***********
2025-06-02 19:37:39.179814 | orchestrator | skipping: [testbed-node-0]
2025-06-02 19:37:39.200482 | orchestrator | skipping: [testbed-node-1]
2025-06-02 19:37:39.244311 | orchestrator | skipping: [testbed-node-2]
2025-06-02 19:37:39.271987 | orchestrator | skipping: [testbed-node-3]
2025-06-02 19:37:39.272232 | orchestrator | skipping: [testbed-node-4]
2025-06-02 19:37:39.273589 | orchestrator | skipping: [testbed-node-5]
2025-06-02 19:37:39.274386 | orchestrator |
2025-06-02 19:37:39.275389 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] *****************
2025-06-02 19:37:39.275990 | orchestrator | Monday 02 June 2025 19:37:39 +0000 (0:00:00.138) 0:00:11.062 ***********
2025-06-02 19:37:39.320278 | orchestrator | skipping: [testbed-node-0]
2025-06-02 19:37:39.343578 | orchestrator | skipping: [testbed-node-1]
2025-06-02 19:37:39.371876 | orchestrator | skipping: [testbed-node-2]
2025-06-02 19:37:39.392585 | orchestrator | skipping: [testbed-node-3]
2025-06-02 19:37:39.429027 | orchestrator | skipping: [testbed-node-4]
2025-06-02 19:37:39.429184 | orchestrator | skipping: [testbed-node-5]
2025-06-02 19:37:39.432719 | orchestrator |
2025-06-02 19:37:39.433599 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] **************
2025-06-02 19:37:39.434779 | orchestrator | Monday 02 June 2025 19:37:39 +0000 (0:00:00.156) 0:00:11.218 ***********
2025-06-02 19:37:39.502598 | orchestrator | skipping: [testbed-node-0]
2025-06-02 19:37:39.520957 | orchestrator | skipping: [testbed-node-1]
2025-06-02 19:37:39.542274 | orchestrator | skipping: [testbed-node-2]
2025-06-02 19:37:39.568751 | orchestrator | skipping: [testbed-node-3]
2025-06-02 19:37:39.568920 | orchestrator | skipping: [testbed-node-4]
2025-06-02 19:37:39.570820 | orchestrator | skipping: [testbed-node-5]
2025-06-02 19:37:39.571145 | orchestrator |
2025-06-02 19:37:39.571917 | orchestrator | TASK [osism.commons.operator : Set password] ***********************************
2025-06-02 19:37:39.572517 | orchestrator | Monday 02 June 2025 19:37:39 +0000 (0:00:00.141) 0:00:11.359 ***********
2025-06-02 19:37:40.199936 | orchestrator | changed: [testbed-node-1]
2025-06-02 19:37:40.200038 | orchestrator | changed: [testbed-node-0]
2025-06-02 19:37:40.201456 | orchestrator | changed: [testbed-node-3]
2025-06-02 19:37:40.202800 | orchestrator | changed: [testbed-node-2]
2025-06-02 19:37:40.203180 | orchestrator | changed: [testbed-node-4]
2025-06-02 19:37:40.204086 | orchestrator | changed: [testbed-node-5]
2025-06-02 19:37:40.204204 | orchestrator |
2025-06-02 19:37:40.204988 | orchestrator | TASK [osism.commons.operator : Unset & lock password] **************************
2025-06-02 19:37:40.205204 | orchestrator | Monday 02 June 2025 19:37:40 +0000 (0:00:00.626) 0:00:11.986 ***********
2025-06-02 19:37:40.286193 | orchestrator | skipping: [testbed-node-0]
2025-06-02 19:37:40.316453 | orchestrator | skipping: [testbed-node-1]
2025-06-02 19:37:40.338595 | orchestrator | skipping: [testbed-node-2]
2025-06-02 19:37:40.440775 | orchestrator | skipping: [testbed-node-3]
2025-06-02 19:37:40.441298 | orchestrator | skipping: [testbed-node-4]
2025-06-02 19:37:40.442356 | orchestrator | skipping: [testbed-node-5]
2025-06-02 19:37:40.444043 | orchestrator |
2025-06-02 19:37:40.445073 | orchestrator | PLAY RECAP *********************************************************************
2025-06-02 19:37:40.445288 | orchestrator | 2025-06-02 19:37:40 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-06-02 19:37:40.445433 | orchestrator | 2025-06-02 19:37:40 | INFO  | Please wait and do not abort execution.
2025-06-02 19:37:40.446666 | orchestrator | testbed-node-0 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-06-02 19:37:40.447571 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-06-02 19:37:40.448256 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-06-02 19:37:40.449068 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-06-02 19:37:40.450739 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-06-02 19:37:40.451327 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-06-02 19:37:40.451987 | orchestrator |
2025-06-02 19:37:40.452518 | orchestrator |
2025-06-02 19:37:40.453017 | orchestrator | TASKS RECAP ********************************************************************
2025-06-02 19:37:40.453522 | orchestrator | Monday 02 June 2025 19:37:40 +0000 (0:00:00.242) 0:00:12.229 ***********
2025-06-02 19:37:40.453920 | orchestrator | ===============================================================================
2025-06-02 19:37:40.454361 | orchestrator | Gathering Facts --------------------------------------------------------- 3.15s
2025-06-02 19:37:40.454877 | orchestrator | osism.commons.operator : Set language variables in .bashrc configuration file --- 1.24s
2025-06-02 19:37:40.455315 | orchestrator | osism.commons.operator : Copy user sudoers file ------------------------- 1.23s
2025-06-02 19:37:40.455659 | orchestrator | osism.commons.operator : Add user to additional groups ------------------ 1.21s
2025-06-02 19:37:40.456082 | orchestrator | osism.commons.operator : Create user ------------------------------------ 0.86s
2025-06-02 19:37:40.456464 | orchestrator | Do not require tty for all users ---------------------------------------- 0.74s
2025-06-02 19:37:40.456878 | orchestrator | osism.commons.operator : Set ssh authorized keys ------------------------ 0.70s
2025-06-02 19:37:40.457202 | orchestrator | osism.commons.operator : Set password ----------------------------------- 0.63s
2025-06-02 19:37:40.457843 | orchestrator | osism.commons.operator : Create operator group -------------------------- 0.60s
2025-06-02 19:37:40.458170 | orchestrator | osism.commons.operator : Create .ssh directory -------------------------- 0.56s
2025-06-02 19:37:40.458609 | orchestrator | osism.commons.operator : Unset & lock password -------------------------- 0.24s
2025-06-02 19:37:40.459013 | orchestrator | osism.commons.operator : Set operator_groups variable to default value --- 0.17s
2025-06-02 19:37:40.459389 | orchestrator | osism.commons.operator : Gather variables for each operating system ----- 0.16s
2025-06-02 19:37:40.459936 | orchestrator | osism.commons.operator : Check number of SSH authorized keys ------------ 0.16s
2025-06-02 19:37:40.460162 | orchestrator | osism.commons.operator : Set authorized GitHub accounts ----------------- 0.16s
2025-06-02 19:37:40.460591 | orchestrator | osism.commons.operator : Delete authorized GitHub accounts -------------- 0.14s
2025-06-02 19:37:40.460967 | orchestrator | osism.commons.operator : Delete ssh authorized keys --------------------- 0.14s
2025-06-02 19:37:40.880036 | orchestrator | + osism apply --environment custom facts
2025-06-02 19:37:42.515758 | orchestrator | 2025-06-02 19:37:42 | INFO  | Trying to run play facts in environment custom
2025-06-02 19:37:42.520731 | orchestrator | Registering Redlock._acquired_script
2025-06-02 19:37:42.520769 | orchestrator | Registering Redlock._extend_script
2025-06-02 19:37:42.520782 | orchestrator | Registering Redlock._release_script
2025-06-02 19:37:42.580537 | orchestrator | 2025-06-02 19:37:42 | INFO  | Task 91d4bf83-e0a0-4b3c-ac71-551ac18c189f (facts) was prepared for execution.
2025-06-02 19:37:42.580601 | orchestrator | 2025-06-02 19:37:42 | INFO  | It takes a moment until task 91d4bf83-e0a0-4b3c-ac71-551ac18c189f (facts) has been started and output is visible here.
2025-06-02 19:37:46.379322 | orchestrator |
2025-06-02 19:37:46.382934 | orchestrator | PLAY [Copy custom network devices fact] ****************************************
2025-06-02 19:37:46.382986 | orchestrator |
2025-06-02 19:37:46.383007 | orchestrator | TASK [Create custom facts directory] *******************************************
2025-06-02 19:37:46.384188 | orchestrator | Monday 02 June 2025 19:37:46 +0000 (0:00:00.086) 0:00:00.086 ***********
2025-06-02 19:37:47.708802 | orchestrator | ok: [testbed-manager]
2025-06-02 19:37:47.708962 | orchestrator | changed: [testbed-node-1]
2025-06-02 19:37:47.709054 | orchestrator | changed: [testbed-node-3]
2025-06-02 19:37:47.710691 | orchestrator | changed: [testbed-node-0]
2025-06-02 19:37:47.712054 | orchestrator | changed: [testbed-node-2]
2025-06-02 19:37:47.713703 | orchestrator | changed: [testbed-node-4]
2025-06-02 19:37:47.715251 | orchestrator | changed: [testbed-node-5]
2025-06-02 19:37:47.716142 | orchestrator |
2025-06-02 19:37:47.717235 | orchestrator | TASK [Copy fact file] **********************************************************
2025-06-02 19:37:47.719079 | orchestrator | Monday 02 June 2025 19:37:47 +0000 (0:00:01.329) 0:00:01.416 ***********
2025-06-02 19:37:48.868610 | orchestrator | ok: [testbed-manager]
2025-06-02 19:37:48.869025 | orchestrator | changed: [testbed-node-1]
2025-06-02 19:37:48.869925 | orchestrator | changed: [testbed-node-4]
2025-06-02 19:37:48.870996 | orchestrator | changed: [testbed-node-0]
2025-06-02 19:37:48.871732 | orchestrator | changed: [testbed-node-3]
2025-06-02 19:37:48.872543 | orchestrator | changed: [testbed-node-5]
2025-06-02 19:37:48.872954 | orchestrator | changed: [testbed-node-2]
2025-06-02 19:37:48.873640 | orchestrator |
2025-06-02 19:37:48.874129 | orchestrator | PLAY [Copy custom ceph devices facts] ******************************************
2025-06-02 19:37:48.874785 | orchestrator |
2025-06-02 19:37:48.875539 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] ***
2025-06-02 19:37:48.875946 | orchestrator | Monday 02 June 2025 19:37:48 +0000 (0:00:01.162) 0:00:02.578 ***********
2025-06-02 19:37:48.972381 | orchestrator | ok: [testbed-node-3]
2025-06-02 19:37:48.972476 | orchestrator | ok: [testbed-node-4]
2025-06-02 19:37:48.973073 | orchestrator | ok: [testbed-node-5]
2025-06-02 19:37:48.973962 | orchestrator |
2025-06-02 19:37:48.974794 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] ***
2025-06-02 19:37:48.975283 | orchestrator | Monday 02 June 2025 19:37:48 +0000 (0:00:00.105) 0:00:02.683 ***********
2025-06-02 19:37:49.169227 | orchestrator | ok: [testbed-node-3]
2025-06-02 19:37:49.169657 | orchestrator | ok: [testbed-node-4]
2025-06-02 19:37:49.170905 | orchestrator | ok: [testbed-node-5]
2025-06-02 19:37:49.171471 | orchestrator |
2025-06-02 19:37:49.172239 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ******************
2025-06-02 19:37:49.173222 | orchestrator | Monday 02 June 2025 19:37:49 +0000 (0:00:00.198) 0:00:02.881 ***********
2025-06-02 19:37:49.351583 | orchestrator | ok: [testbed-node-3]
2025-06-02 19:37:49.351889 | orchestrator | ok: [testbed-node-4]
2025-06-02 19:37:49.351980 | orchestrator | ok: [testbed-node-5]
2025-06-02 19:37:49.352291 | orchestrator |
2025-06-02 19:37:49.352568 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] ***
2025-06-02 19:37:49.353300 | orchestrator | Monday 02 June 2025 19:37:49 +0000 (0:00:00.182) 0:00:03.064 ***********
2025-06-02 19:37:49.521251 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-06-02 19:37:49.521817 | orchestrator |
2025-06-02 19:37:49.522856 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] *****
2025-06-02 19:37:49.524040 | orchestrator | Monday 02 June 2025 19:37:49 +0000 (0:00:00.166) 0:00:03.230 ***********
2025-06-02 19:37:49.967717 | orchestrator | ok: [testbed-node-3]
2025-06-02 19:37:49.967816 | orchestrator | ok: [testbed-node-5]
2025-06-02 19:37:49.968498 | orchestrator | ok: [testbed-node-4]
2025-06-02 19:37:49.969851 | orchestrator |
2025-06-02 19:37:49.970845 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] *************
2025-06-02 19:37:49.971598 | orchestrator | Monday 02 June 2025 19:37:49 +0000 (0:00:00.447) 0:00:03.677 ***********
2025-06-02 19:37:50.065269 | orchestrator | skipping: [testbed-node-3]
2025-06-02 19:37:50.065937 | orchestrator | skipping: [testbed-node-4]
2025-06-02 19:37:50.066574 | orchestrator | skipping: [testbed-node-5]
2025-06-02 19:37:50.067459 | orchestrator |
2025-06-02 19:37:50.068391 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] ***************
2025-06-02 19:37:50.069213 | orchestrator | Monday 02 June 2025 19:37:50 +0000 (0:00:00.099) 0:00:03.777 ***********
2025-06-02 19:37:51.132030 | orchestrator | changed: [testbed-node-3]
2025-06-02 19:37:51.132644 | orchestrator | changed: [testbed-node-4]
2025-06-02 19:37:51.136580 | orchestrator | changed: [testbed-node-5]
2025-06-02 19:37:51.137208 | orchestrator |
2025-06-02 19:37:51.137934 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] *********************
2025-06-02 19:37:51.138360 | orchestrator | Monday 02 June 2025 19:37:51 +0000 (0:00:01.065) 0:00:04.842 ***********
2025-06-02 19:37:51.591893 | orchestrator | ok: [testbed-node-3]
2025-06-02 19:37:51.592709 | orchestrator | ok: [testbed-node-4]
2025-06-02 19:37:51.593868 | orchestrator | ok: [testbed-node-5]
2025-06-02 19:37:51.594831 | orchestrator |
2025-06-02 19:37:51.595789 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] *********************
2025-06-02 19:37:51.596769 | orchestrator | Monday 02 June 2025 19:37:51 +0000 (0:00:00.459) 0:00:05.302 ***********
2025-06-02 19:37:52.609903 | orchestrator | changed: [testbed-node-3]
2025-06-02 19:37:52.610353 | orchestrator | changed: [testbed-node-4]
2025-06-02 19:37:52.611163 | orchestrator | changed: [testbed-node-5]
2025-06-02 19:37:52.612099 | orchestrator |
2025-06-02 19:37:52.612906 | orchestrator | TASK [osism.commons.repository : Update package cache] *************************
2025-06-02 19:37:52.613704 | orchestrator | Monday 02 June 2025 19:37:52 +0000 (0:00:01.016) 0:00:06.319 ***********
2025-06-02 19:38:06.259307 | orchestrator | changed: [testbed-node-4]
2025-06-02 19:38:06.259443 | orchestrator | changed: [testbed-node-3]
2025-06-02 19:38:06.259538 | orchestrator | changed: [testbed-node-5]
2025-06-02 19:38:06.260617 | orchestrator |
2025-06-02 19:38:06.262435 | orchestrator | TASK [Install required packages (RedHat)] **************************************
2025-06-02 19:38:06.263371 | orchestrator | Monday 02 June 2025 19:38:06 +0000 (0:00:13.648) 0:00:19.968 ***********
2025-06-02 19:38:06.368557 | orchestrator | skipping: [testbed-node-3]
2025-06-02 19:38:06.368911 | orchestrator | skipping: [testbed-node-4]
2025-06-02 19:38:06.369999 | orchestrator | skipping: [testbed-node-5]
2025-06-02 19:38:06.370375 | orchestrator |
2025-06-02 19:38:06.371168 | orchestrator | TASK [Install required packages (Debian)] **************************************
2025-06-02 19:38:06.371738 | orchestrator | Monday 02 June 2025 19:38:06 +0000 (0:00:00.112) 0:00:20.080 ***********
2025-06-02 19:38:13.468258 | orchestrator | changed: [testbed-node-3]
2025-06-02 19:38:13.469042 | orchestrator | changed: [testbed-node-4]
2025-06-02 19:38:13.470512 | orchestrator | changed: [testbed-node-5]
2025-06-02 19:38:13.471906 | orchestrator |
2025-06-02 19:38:13.473201 | orchestrator | TASK [Create custom facts directory] *******************************************
2025-06-02 19:38:13.473308 | orchestrator | Monday 02 June 2025 19:38:13 +0000 (0:00:07.096) 0:00:27.177 ***********
2025-06-02 19:38:13.899314 | orchestrator | ok: [testbed-node-3]
2025-06-02 19:38:13.899377 | orchestrator | ok: [testbed-node-4]
2025-06-02 19:38:13.900396 | orchestrator | ok: [testbed-node-5]
2025-06-02 19:38:13.901164 | orchestrator |
2025-06-02 19:38:13.902058 | orchestrator | TASK [Copy fact files] *********************************************************
2025-06-02 19:38:13.903098 | orchestrator | Monday 02 June 2025 19:38:13 +0000 (0:00:00.432) 0:00:27.609 ***********
2025-06-02 19:38:17.346322 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices)
2025-06-02 19:38:17.346428 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices)
2025-06-02 19:38:17.347950 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices)
2025-06-02 19:38:17.349507 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices_all)
2025-06-02 19:38:17.350350 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices_all)
2025-06-02 19:38:17.353456 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices_all)
2025-06-02 19:38:17.353905 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices)
2025-06-02 19:38:17.354435 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices)
2025-06-02 19:38:17.355064 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices)
2025-06-02 19:38:17.355587 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices_all)
2025-06-02 19:38:17.357414 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices_all)
2025-06-02 19:38:17.357622 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices_all)
2025-06-02 19:38:17.358106 | orchestrator |
2025-06-02 19:38:17.358599 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] *****
2025-06-02 19:38:17.359044 | orchestrator | Monday 02 June 2025 19:38:17 +0000 (0:00:03.445) 0:00:31.055 ***********
2025-06-02 19:38:18.594923 | orchestrator | ok: [testbed-node-3]
2025-06-02 19:38:18.595092 | orchestrator | ok: [testbed-node-4]
2025-06-02 19:38:18.595381 | orchestrator | ok: [testbed-node-5]
2025-06-02 19:38:18.598863 | orchestrator |
2025-06-02 19:38:18.598997 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2025-06-02 19:38:18.599581 | orchestrator |
2025-06-02 19:38:18.600033 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2025-06-02 19:38:18.600331 | orchestrator | Monday 02 June 2025 19:38:18 +0000 (0:00:01.249) 0:00:32.305 ***********
2025-06-02 19:38:22.384257 | orchestrator | ok: [testbed-node-2]
2025-06-02 19:38:22.384367 | orchestrator | ok: [testbed-node-1]
2025-06-02 19:38:22.384445 | orchestrator | ok: [testbed-node-0]
2025-06-02 19:38:22.384858 | orchestrator | ok: [testbed-manager]
2025-06-02 19:38:22.385171 | orchestrator | ok: [testbed-node-4]
2025-06-02 19:38:22.385856 | orchestrator | ok: [testbed-node-5]
2025-06-02 19:38:22.386269 | orchestrator | ok: [testbed-node-3]
2025-06-02 19:38:22.386745 | orchestrator |
2025-06-02 19:38:22.387228 | orchestrator | PLAY RECAP *********************************************************************
2025-06-02 19:38:22.388062 | orchestrator | 2025-06-02 19:38:22 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-06-02 19:38:22.388091 | orchestrator | 2025-06-02 19:38:22 | INFO  | Please wait and do not abort execution.
2025-06-02 19:38:22.388264 | orchestrator | testbed-manager : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-02 19:38:22.388757 | orchestrator | testbed-node-0 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-02 19:38:22.389218 | orchestrator | testbed-node-1 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-02 19:38:22.389521 | orchestrator | testbed-node-2 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-02 19:38:22.389945 | orchestrator | testbed-node-3 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-06-02 19:38:22.390385 | orchestrator | testbed-node-4 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-06-02 19:38:22.390562 | orchestrator | testbed-node-5 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-06-02 19:38:22.390894 | orchestrator |
2025-06-02 19:38:22.391317 | orchestrator |
2025-06-02 19:38:22.391758 | orchestrator | TASKS RECAP ********************************************************************
2025-06-02 19:38:22.392102 | orchestrator | Monday 02 June 2025 19:38:22 +0000 (0:00:03.790) 0:00:36.096 ***********
2025-06-02 19:38:22.392476 | orchestrator | ===============================================================================
2025-06-02 19:38:22.392792 | orchestrator | osism.commons.repository : Update package cache ------------------------ 13.65s
2025-06-02 19:38:22.393164 | orchestrator | Install required packages (Debian) -------------------------------------- 7.10s
2025-06-02 19:38:22.393556 | orchestrator | Gathers facts about hosts ----------------------------------------------- 3.79s
2025-06-02 19:38:22.393802 | orchestrator | Copy fact files --------------------------------------------------------- 3.45s
2025-06-02 19:38:22.394198 | orchestrator | Create custom facts directory ------------------------------------------- 1.33s
2025-06-02 19:38:22.394648 | orchestrator | osism.commons.repository : Force update of package cache ---------------- 1.25s
2025-06-02 19:38:22.394792 | orchestrator | Copy fact file ---------------------------------------------------------- 1.16s
2025-06-02 19:38:22.397509 | orchestrator | osism.commons.repository : Copy 99osism apt configuration --------------- 1.07s
2025-06-02 19:38:22.398740 | orchestrator | osism.commons.repository : Copy ubuntu.sources file --------------------- 1.02s
2025-06-02 19:38:22.399350 | orchestrator | osism.commons.repository : Remove sources.list file --------------------- 0.46s
2025-06-02 19:38:22.400392 | orchestrator | osism.commons.repository : Create /etc/apt/sources.list.d directory ----- 0.45s
2025-06-02 19:38:22.400879 | orchestrator | Create custom facts directory ------------------------------------------- 0.43s
2025-06-02 19:38:22.401479 | orchestrator | osism.commons.repository : Set repository_default fact to default value --- 0.20s
2025-06-02 19:38:22.402336 | orchestrator | osism.commons.repository : Set repositories to default ------------------ 0.18s
2025-06-02 19:38:22.403070 | orchestrator | osism.commons.repository : Include distribution specific repository tasks --- 0.17s
2025-06-02 19:38:22.403846 | orchestrator | Install required packages (RedHat) -------------------------------------- 0.11s
2025-06-02 19:38:22.404181 | orchestrator | osism.commons.repository : Gather variables for each operating system --- 0.11s
2025-06-02 19:38:22.404741 | orchestrator | osism.commons.repository : Include tasks for Ubuntu < 24.04 ------------- 0.10s
2025-06-02 19:38:22.840597 | orchestrator | + osism apply bootstrap
2025-06-02 19:38:24.467187 | orchestrator | Registering Redlock._acquired_script
2025-06-02 19:38:24.467316 | orchestrator | Registering Redlock._extend_script
2025-06-02 19:38:24.467333 | orchestrator | Registering Redlock._release_script
2025-06-02 19:38:24.532510 | orchestrator | 2025-06-02 19:38:24 | INFO  | Task 2de1324e-b6dd-4d69-a963-59595076456c (bootstrap) was prepared for execution.
2025-06-02 19:38:24.605156 | orchestrator | 2025-06-02 19:38:24 | INFO  | It takes a moment until task 2de1324e-b6dd-4d69-a963-59595076456c (bootstrap) has been started and output is visible here.
2025-06-02 19:38:28.558437 | orchestrator |
2025-06-02 19:38:28.559222 | orchestrator | PLAY [Group hosts based on state bootstrap] ************************************
2025-06-02 19:38:28.563276 | orchestrator |
2025-06-02 19:38:28.565080 | orchestrator | TASK [Group hosts based on state bootstrap] ************************************
2025-06-02 19:38:28.566406 | orchestrator | Monday 02 June 2025 19:38:28 +0000 (0:00:00.162) 0:00:00.162 ***********
2025-06-02 19:38:28.636285 | orchestrator | ok: [testbed-manager]
2025-06-02 19:38:28.656552 | orchestrator | ok: [testbed-node-3]
2025-06-02 19:38:28.686203 | orchestrator | ok: [testbed-node-4]
2025-06-02 19:38:28.709773 | orchestrator | ok: [testbed-node-5]
2025-06-02 19:38:28.783531 | orchestrator | ok: [testbed-node-0]
2025-06-02 19:38:28.785218 | orchestrator | ok: [testbed-node-1]
2025-06-02 19:38:28.786475 | orchestrator | ok: [testbed-node-2]
2025-06-02 19:38:28.790148 | orchestrator |
2025-06-02 19:38:28.790822 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2025-06-02 19:38:28.792146 | orchestrator |
2025-06-02 19:38:28.792949 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2025-06-02 19:38:28.793982 | orchestrator | Monday 02 June 2025 19:38:28 +0000 (0:00:00.228) 0:00:00.391 ***********
2025-06-02 19:38:32.438507 | orchestrator | ok: [testbed-node-2]
2025-06-02 19:38:32.438619 | orchestrator | ok: [testbed-node-1]
2025-06-02 19:38:32.439721 | orchestrator | ok: [testbed-node-0]
2025-06-02 19:38:32.440676 | orchestrator | ok: [testbed-manager]
2025-06-02 19:38:32.441009 | orchestrator | ok: [testbed-node-4]
2025-06-02 19:38:32.441621 | orchestrator | ok: [testbed-node-3]
2025-06-02 19:38:32.442616 | orchestrator | ok: [testbed-node-5]
2025-06-02 19:38:32.442921 | orchestrator |
2025-06-02 19:38:32.444608 | orchestrator | PLAY [Gather facts for all hosts (if using --limit)] ***************************
2025-06-02 19:38:32.446005 | orchestrator |
2025-06-02 19:38:32.446281 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2025-06-02 19:38:32.447305 | orchestrator | Monday 02 June 2025 19:38:32 +0000 (0:00:03.654) 0:00:04.045 ***********
2025-06-02 19:38:32.537477 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)
2025-06-02 19:38:32.542155 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)
2025-06-02 19:38:32.542207 | orchestrator | skipping: [testbed-node-3] => (item=testbed-manager)
2025-06-02 19:38:32.605246 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)
2025-06-02 19:38:32.609184 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-06-02 19:38:32.677491 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-06-02 19:38:32.677905 | orchestrator | skipping: [testbed-node-4] => (item=testbed-manager)
2025-06-02 19:38:32.678273 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)
2025-06-02 19:38:32.678639 | orchestrator | skipping: [testbed-node-5] => (item=testbed-manager)
2025-06-02 19:38:32.678829 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)
2025-06-02 19:38:32.681273 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-06-02 19:38:32.681410 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)
2025-06-02 19:38:32.681900 | orchestrator | skipping: [testbed-node-0] => (item=testbed-manager)
2025-06-02 19:38:32.684270 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)
2025-06-02 19:38:32.998315 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)
2025-06-02 19:38:32.998837 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2025-06-02 19:38:32.999250 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)
2025-06-02 19:38:33.000635 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)
2025-06-02 19:38:33.002637 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2025-06-02 19:38:33.002688 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)
2025-06-02 19:38:33.003170 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)
2025-06-02 19:38:33.003923 | orchestrator | skipping: [testbed-manager]
2025-06-02 19:38:33.004668 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2025-06-02 19:38:33.005435 | orchestrator | skipping: [testbed-node-1] => (item=testbed-manager)
2025-06-02 19:38:33.006475 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)
2025-06-02 19:38:33.006497 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2025-06-02 19:38:33.007033 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2025-06-02 19:38:33.007843 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)
2025-06-02 19:38:33.008375 | orchestrator | skipping: [testbed-node-3]
2025-06-02 19:38:33.009036 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2025-06-02 19:38:33.009399 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2025-06-02 19:38:33.010082 | orchestrator | skipping: [testbed-node-2] => (item=testbed-manager)
2025-06-02 19:38:33.010936 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2025-06-02 19:38:33.012099 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2025-06-02 19:38:33.012184 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)
2025-06-02 19:38:33.012206 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2025-06-02 19:38:33.012724 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2025-06-02 19:38:33.012746 | orchestrator | skipping: [testbed-node-5]
2025-06-02 19:38:33.013177 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)
2025-06-02 19:38:33.014517 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2025-06-02 19:38:33.014610 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2025-06-02 19:38:33.014630 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)
2025-06-02 19:38:33.015034 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)
2025-06-02 19:38:33.016262 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2025-06-02 19:38:33.016283 | orchestrator | skipping: [testbed-node-4]
2025-06-02 19:38:33.016643 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2025-06-02 19:38:33.017080 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)
2025-06-02 19:38:33.017731 | orchestrator | skipping: [testbed-node-0]
2025-06-02 19:38:33.018111 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)
2025-06-02 19:38:33.018728 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)
2025-06-02 19:38:33.019167 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)
2025-06-02 19:38:33.019783 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)
2025-06-02 19:38:33.020071 | orchestrator | skipping: [testbed-node-1]
2025-06-02 19:38:33.020590 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)
2025-06-02 19:38:33.020928 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)
2025-06-02 19:38:33.021375 | orchestrator | skipping: [testbed-node-2]
2025-06-02 19:38:33.021928 | orchestrator |
2025-06-02 19:38:33.022368 | orchestrator | PLAY [Apply bootstrap roles part 1] ********************************************
2025-06-02 19:38:33.022749 | orchestrator |
2025-06-02 19:38:33.023246 | orchestrator | TASK [osism.commons.hostname : Set hostname] ***********************************
2025-06-02 19:38:33.025168 | orchestrator | Monday 02 June 2025 19:38:32 +0000 (0:00:00.560) 0:00:04.606 ***********
2025-06-02 19:38:34.244084 | orchestrator | ok: [testbed-manager]
2025-06-02 19:38:34.244299 | orchestrator | ok: [testbed-node-1]
2025-06-02 19:38:34.246207 | orchestrator | ok: [testbed-node-3]
2025-06-02 19:38:34.246292 | orchestrator | ok: [testbed-node-5]
2025-06-02 19:38:34.247209 | orchestrator | ok: [testbed-node-0]
2025-06-02 19:38:34.248011 | orchestrator | ok: [testbed-node-2]
2025-06-02 19:38:34.248631 | orchestrator | ok: [testbed-node-4]
2025-06-02 19:38:34.251448 | orchestrator |
2025-06-02 19:38:34.252152 | orchestrator | TASK [osism.commons.hostname : Copy /etc/hostname] *****************************
2025-06-02 19:38:34.252846 | orchestrator | Monday 02 June 2025 19:38:34 +0000 (0:00:01.243) 0:00:05.850 ***********
2025-06-02 19:38:35.393523 | orchestrator | ok: [testbed-manager]
2025-06-02 19:38:35.393629 | orchestrator | ok: [testbed-node-2]
2025-06-02 19:38:35.393644 | orchestrator | ok: [testbed-node-1]
2025-06-02 19:38:35.393713 | orchestrator | ok: [testbed-node-0]
2025-06-02 19:38:35.394639 | orchestrator | ok: [testbed-node-3]
2025-06-02 19:38:35.395768 | orchestrator | ok: [testbed-node-4]
2025-06-02 19:38:35.396856 | orchestrator | ok: [testbed-node-5]
2025-06-02 19:38:35.397731 | orchestrator |
2025-06-02 19:38:35.398538 | orchestrator | TASK [osism.commons.hosts : Include type specific tasks] ***********************
2025-06-02 19:38:35.399176 | orchestrator | Monday 02 June 2025 19:38:35 +0000 (0:00:01.144) 0:00:06.994 ***********
2025-06-02 19:38:35.655744 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/hosts/tasks/type-template.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-06-02 19:38:35.656672 | orchestrator |
2025-06-02 19:38:35.657574 | orchestrator | TASK [osism.commons.hosts : Copy /etc/hosts file] ******************************
2025-06-02 19:38:35.658727 | orchestrator | Monday 02 June 2025 19:38:35 +0000 (0:00:00.268) 0:00:07.262 ***********
2025-06-02 19:38:37.555627 | orchestrator | changed: [testbed-manager]
2025-06-02 19:38:37.557178 | orchestrator | changed: [testbed-node-3]
2025-06-02 19:38:37.557957 | orchestrator | changed: [testbed-node-1]
2025-06-02 19:38:37.559476 | orchestrator | changed: [testbed-node-2]
2025-06-02 19:38:37.561820 | orchestrator | changed: [testbed-node-5]
2025-06-02 19:38:37.562957 | orchestrator | changed: [testbed-node-4]
2025-06-02 19:38:37.563021 | orchestrator | changed: [testbed-node-0]
2025-06-02 19:38:37.564479 | orchestrator |
2025-06-02 19:38:37.564502 | orchestrator | TASK [osism.commons.proxy : Include distribution specific tasks] ***************
2025-06-02 19:38:37.564905 | orchestrator | Monday 02 June 2025 19:38:37 +0000 (0:00:01.898) 0:00:09.160 ***********
2025-06-02 19:38:37.648215 | orchestrator | skipping: [testbed-manager]
2025-06-02 19:38:37.836739 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/proxy/tasks/Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-06-02 19:38:37.837691 | orchestrator |
2025-06-02 19:38:37.841455 | orchestrator | TASK [osism.commons.proxy : Configure proxy parameters for apt] ****************
2025-06-02 19:38:37.842164 | orchestrator | Monday 02 June 2025 19:38:37 +0000 (0:00:00.282) 0:00:09.443 ***********
2025-06-02 19:38:38.810781 | orchestrator | changed: [testbed-node-3]
2025-06-02 19:38:38.811063 | orchestrator | changed: [testbed-node-4]
2025-06-02 19:38:38.812422 | orchestrator | changed: [testbed-node-5]
2025-06-02 19:38:38.813695 | orchestrator | changed: [testbed-node-1]
2025-06-02 19:38:38.814813 | orchestrator | changed: [testbed-node-0]
2025-06-02 19:38:38.815853 | orchestrator | changed: [testbed-node-2]
2025-06-02 19:38:38.816595 | orchestrator |
2025-06-02 19:38:38.817266 | orchestrator | TASK [osism.commons.proxy : Set system wide settings in environment file] ******
2025-06-02 19:38:38.818209 | orchestrator | Monday 02 June 2025 19:38:38 +0000 (0:00:00.973) 0:00:10.416 ***********
2025-06-02 19:38:38.880478 | orchestrator | skipping: [testbed-manager]
2025-06-02 19:38:39.354267 | orchestrator | changed: [testbed-node-3]
2025-06-02 19:38:39.354430 | orchestrator | changed: [testbed-node-2]
2025-06-02 19:38:39.356559 | orchestrator | changed: [testbed-node-0]
2025-06-02 19:38:39.359462 | orchestrator | changed: [testbed-node-1]
2025-06-02 19:38:39.359488 | orchestrator | changed: [testbed-node-4]
2025-06-02 19:38:39.359499 | orchestrator | changed: [testbed-node-5]
2025-06-02 19:38:39.359511 | orchestrator |
2025-06-02 19:38:39.360015 | orchestrator | TASK [osism.commons.proxy : Remove system wide settings in environment file] ***
2025-06-02 19:38:39.361180 | orchestrator | Monday 02 June 2025 19:38:39 +0000 (0:00:00.544) 0:00:10.961 ***********
2025-06-02 19:38:39.446807 | orchestrator | skipping: [testbed-node-3]
2025-06-02 19:38:39.474132 | orchestrator | skipping: [testbed-node-4]
2025-06-02 19:38:39.494712 | orchestrator | skipping: [testbed-node-5]
2025-06-02 19:38:39.786399 | orchestrator | skipping: [testbed-node-0]
2025-06-02 19:38:39.790515 | orchestrator | skipping: [testbed-node-1]
2025-06-02 19:38:39.790551 | orchestrator | skipping: [testbed-node-2]
2025-06-02 19:38:39.790563 | orchestrator | ok: [testbed-manager]
2025-06-02 19:38:39.790575 | orchestrator |
2025-06-02 19:38:39.790888 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] ***
2025-06-02 19:38:39.791780 | orchestrator | Monday 02 June 2025 19:38:39 +0000 (0:00:00.431) 0:00:11.392 ***********
2025-06-02 19:38:39.853971 | orchestrator | skipping: [testbed-manager]
2025-06-02 19:38:39.877445 | orchestrator | skipping: [testbed-node-3]
2025-06-02 19:38:39.898419 | orchestrator | skipping: [testbed-node-4]
2025-06-02 19:38:39.924404 | orchestrator | skipping: [testbed-node-5]
2025-06-02 19:38:39.981988 | orchestrator | skipping: [testbed-node-0]
2025-06-02 19:38:39.982786 | orchestrator | skipping: [testbed-node-1]
2025-06-02 19:38:39.984233 | orchestrator | skipping: [testbed-node-2]
2025-06-02 19:38:39.985640 | orchestrator |
2025-06-02 19:38:39.986233 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] *********************
2025-06-02 19:38:39.987177 | orchestrator | Monday 02 June 2025 19:38:39 +0000 (0:00:00.196) 0:00:11.589 ***********
2025-06-02 19:38:40.252974 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-06-02 19:38:40.255543 | orchestrator |
2025-06-02 19:38:40.256322 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] ***
2025-06-02 19:38:40.258117 | orchestrator | Monday 02 June 2025 19:38:40 +0000 (0:00:00.269) 0:00:11.859 ***********
2025-06-02 19:38:40.532960 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-06-02 19:38:40.533849 | orchestrator |
2025-06-02 19:38:40.535366 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] ***
2025-06-02 19:38:40.536420 | orchestrator | Monday 02 June 2025 19:38:40 +0000 (0:00:00.281) 0:00:12.140 ***********
2025-06-02 19:38:41.710287 | orchestrator | ok: [testbed-manager]
2025-06-02 19:38:41.710393 | orchestrator | ok: [testbed-node-0]
2025-06-02 19:38:41.710812 | orchestrator | ok: [testbed-node-2]
2025-06-02 19:38:41.712739 | orchestrator | ok: [testbed-node-1]
2025-06-02 19:38:41.713796 | orchestrator | ok: [testbed-node-3]
2025-06-02 19:38:41.714829 | orchestrator | ok: [testbed-node-5]
2025-06-02 19:38:41.715962 | orchestrator | ok: [testbed-node-4]
2025-06-02 19:38:41.716971 | orchestrator |
2025-06-02 19:38:41.717453 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] *************
2025-06-02 19:38:41.718624 | orchestrator | Monday 02 June 2025 19:38:41 +0000 (0:00:01.174) 0:00:13.315 ***********
2025-06-02 19:38:41.783088 | orchestrator | skipping: [testbed-manager]
2025-06-02 19:38:41.814136 | orchestrator | skipping: [testbed-node-3]
2025-06-02 19:38:41.835612 | orchestrator | skipping: [testbed-node-4]
2025-06-02 19:38:41.860111 | orchestrator | skipping: [testbed-node-5]
2025-06-02 19:38:41.909214 | orchestrator | skipping: [testbed-node-0]
2025-06-02 19:38:41.910079 | orchestrator | skipping: [testbed-node-1]
2025-06-02 19:38:41.910864 | orchestrator | skipping: [testbed-node-2]
2025-06-02 19:38:41.911679 | orchestrator |
2025-06-02 19:38:41.912520 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] *****
2025-06-02 19:38:41.913433 | orchestrator | Monday 02 June 2025 19:38:41 +0000 (0:00:00.201) 0:00:13.516 ***********
2025-06-02 19:38:42.423806 | orchestrator | ok: [testbed-manager]
2025-06-02 19:38:42.424167 | orchestrator | ok: [testbed-node-3]
2025-06-02 19:38:42.425446 | orchestrator | ok: [testbed-node-4]
2025-06-02 19:38:42.427029 | orchestrator | ok: [testbed-node-5]
2025-06-02 19:38:42.427166 | orchestrator | ok: [testbed-node-0]
2025-06-02 19:38:42.428680 | orchestrator | ok: [testbed-node-1]
2025-06-02 19:38:42.429414 | orchestrator | ok: [testbed-node-2]
2025-06-02 19:38:42.430438 | orchestrator |
2025-06-02 19:38:42.430913 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] *******
2025-06-02 19:38:42.431942 | orchestrator | Monday 02 June 2025 19:38:42 +0000 (0:00:00.513) 0:00:14.030 ***********
2025-06-02 19:38:42.497446 | orchestrator | skipping: [testbed-manager]
2025-06-02 19:38:42.522512 | orchestrator | skipping: [testbed-node-3]
2025-06-02 19:38:42.546421 | orchestrator | skipping: [testbed-node-4]
2025-06-02 19:38:42.574720 | orchestrator | skipping: [testbed-node-5]
2025-06-02 19:38:42.647080 | orchestrator | skipping: [testbed-node-0]
2025-06-02 19:38:42.647846 | orchestrator | skipping: [testbed-node-1]
2025-06-02 19:38:42.648870 | orchestrator | skipping: [testbed-node-2]
2025-06-02 19:38:42.650292 | orchestrator |
2025-06-02 19:38:42.651264 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] ***
2025-06-02 19:38:42.652078 | orchestrator | Monday 02 June 2025 19:38:42 +0000 (0:00:00.223) 0:00:14.253 ***********
2025-06-02 19:38:43.172946 | orchestrator | ok: [testbed-manager]
2025-06-02 19:38:43.173224 | orchestrator | changed: [testbed-node-3]
2025-06-02 19:38:43.173956 | orchestrator | changed: [testbed-node-4]
2025-06-02 19:38:43.175455 | orchestrator | changed: [testbed-node-5]
2025-06-02 19:38:43.175827 | orchestrator | changed: [testbed-node-0]
2025-06-02 19:38:43.176610 | orchestrator | changed: [testbed-node-1]
2025-06-02 19:38:43.178315 | orchestrator | changed: [testbed-node-2]
2025-06-02 19:38:43.179211 | orchestrator |
2025-06-02 19:38:43.180108 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] *********************
2025-06-02 19:38:43.181037 | orchestrator | Monday 02 June 2025 19:38:43 +0000 (0:00:00.525) 0:00:14.779 ***********
2025-06-02 19:38:44.274277 | orchestrator | ok: [testbed-manager]
2025-06-02 19:38:44.274396 | orchestrator | changed: [testbed-node-3]
2025-06-02 19:38:44.274701 | orchestrator | changed: [testbed-node-4]
2025-06-02 19:38:44.275996 | orchestrator | changed: [testbed-node-1]
2025-06-02 19:38:44.276984 | orchestrator | changed: [testbed-node-5]
2025-06-02 19:38:44.278305 | orchestrator | changed: [testbed-node-0]
2025-06-02 19:38:44.278705 | orchestrator | changed: [testbed-node-2]
2025-06-02 19:38:44.279680 | orchestrator |
2025-06-02 19:38:44.280045 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ********
2025-06-02 19:38:44.280817 | orchestrator | Monday 02 June 2025 19:38:44 +0000 (0:00:01.099) 0:00:15.878 ***********
2025-06-02 19:38:45.376608 | orchestrator | ok: [testbed-manager]
2025-06-02 19:38:45.377018 | orchestrator | ok: [testbed-node-0]
2025-06-02 19:38:45.379521 | orchestrator | ok: [testbed-node-1]
2025-06-02 19:38:45.379549 | orchestrator | ok: [testbed-node-5]
2025-06-02 19:38:45.379559 | orchestrator | ok: [testbed-node-3]
2025-06-02 19:38:45.379568 | orchestrator | ok: [testbed-node-2]
2025-06-02 19:38:45.380044 | orchestrator | ok: [testbed-node-4]
2025-06-02 19:38:45.380819 | orchestrator |
2025-06-02 19:38:45.381532 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] ***
2025-06-02 19:38:45.381936 | orchestrator | Monday 02 June 2025 19:38:45 +0000 (0:00:01.104) 0:00:16.983 ***********
2025-06-02 19:38:45.758418 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-06-02 19:38:45.758517 | orchestrator |
2025-06-02 19:38:45.758531 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] *************
2025-06-02 19:38:45.758596 | orchestrator | Monday 02 June 2025 19:38:45 +0000 (0:00:00.379) 0:00:17.362 ***********
2025-06-02 19:38:45.844348 | orchestrator | skipping: [testbed-manager]
2025-06-02 19:38:46.961786 | orchestrator | changed: [testbed-node-4]
2025-06-02 19:38:46.965995 | orchestrator | changed: [testbed-node-0]
2025-06-02 19:38:46.967164 | orchestrator | changed: [testbed-node-1]
2025-06-02 19:38:46.967631 | orchestrator | changed: [testbed-node-2]
2025-06-02 19:38:46.968626 | orchestrator | changed: [testbed-node-3]
2025-06-02 19:38:46.969591 | orchestrator | changed: [testbed-node-5]
2025-06-02 19:38:46.970207 | orchestrator |
2025-06-02 19:38:46.971010 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] ***
2025-06-02 19:38:46.972404 | orchestrator | Monday 02 June 2025 19:38:46 +0000 (0:00:01.205) 0:00:18.567 ***********
2025-06-02 19:38:47.040241 | orchestrator | ok: [testbed-manager]
2025-06-02 19:38:47.071960 | orchestrator | ok: [testbed-node-3]
2025-06-02 19:38:47.100118 | orchestrator | ok: [testbed-node-4]
2025-06-02 19:38:47.121312 | orchestrator | ok: [testbed-node-5]
2025-06-02 19:38:47.174202 | orchestrator | ok: [testbed-node-0]
2025-06-02 19:38:47.174286 | orchestrator | ok: [testbed-node-1]
2025-06-02 19:38:47.175711 | orchestrator | ok: [testbed-node-2]
2025-06-02 19:38:47.175737 | orchestrator |
2025-06-02 19:38:47.179643 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] ***
2025-06-02 19:38:47.182147 | orchestrator | Monday 02 June 2025 19:38:47 +0000 (0:00:00.214) 0:00:18.781 ***********
2025-06-02 19:38:47.260059 | orchestrator | ok: [testbed-manager]
2025-06-02 19:38:47.286233 | orchestrator | ok: [testbed-node-3]
2025-06-02 19:38:47.310002 | orchestrator | ok: [testbed-node-4]
2025-06-02 19:38:47.336410 | orchestrator | ok: [testbed-node-5]
2025-06-02 19:38:47.398878 | orchestrator | ok: [testbed-node-0]
2025-06-02 19:38:47.399138 | orchestrator | ok: [testbed-node-1]
2025-06-02 19:38:47.399617 | orchestrator | ok: [testbed-node-2]
2025-06-02 19:38:47.400149 | orchestrator |
2025-06-02 19:38:47.400568 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ******************
2025-06-02 19:38:47.403684 | orchestrator | Monday 02 June 2025 19:38:47 +0000 (0:00:00.225) 0:00:19.006 ***********
2025-06-02 19:38:47.471069 | orchestrator | ok: [testbed-manager]
2025-06-02 19:38:47.493328 | orchestrator | ok: [testbed-node-3]
2025-06-02 19:38:47.518687 | orchestrator | ok: [testbed-node-4]
2025-06-02 19:38:47.540610 | orchestrator | ok: [testbed-node-5]
2025-06-02 19:38:47.600593 | orchestrator | ok: [testbed-node-0]
2025-06-02 19:38:47.600689 | orchestrator | ok: [testbed-node-1]
2025-06-02 19:38:47.600705 | orchestrator | ok: [testbed-node-2]
2025-06-02 19:38:47.600716 | orchestrator |
2025-06-02 19:38:47.600728 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] ***
2025-06-02 19:38:47.601680 | orchestrator | Monday 02 June 2025 19:38:47 +0000 (0:00:00.196) 0:00:19.203 ***********
2025-06-02 19:38:47.874275 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-06-02 19:38:47.874414 | orchestrator |
2025-06-02 19:38:47.875540 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] *****
2025-06-02 19:38:47.876259 | orchestrator | Monday 02 June 2025 19:38:47 +0000 (0:00:00.277) 0:00:19.480 ***********
2025-06-02 19:38:48.383398 | orchestrator | ok: [testbed-manager]
2025-06-02 19:38:48.383485 | orchestrator | ok: [testbed-node-4]
2025-06-02 19:38:48.383536 | orchestrator | ok: [testbed-node-3]
2025-06-02 19:38:48.384168 | orchestrator | ok: [testbed-node-5]
2025-06-02 19:38:48.384957 | orchestrator | ok: [testbed-node-0]
2025-06-02 19:38:48.386150 | orchestrator | ok: [testbed-node-1]
2025-06-02 19:38:48.386841 | orchestrator | ok: [testbed-node-2]
2025-06-02 19:38:48.387768 | orchestrator |
2025-06-02 19:38:48.388714 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] *************
2025-06-02 19:38:48.389632 | orchestrator | Monday 02 June 2025 19:38:48 +0000 (0:00:00.505) 0:00:19.986 ***********
2025-06-02 19:38:48.450780 | orchestrator | skipping: [testbed-manager]
2025-06-02 19:38:48.474779 | orchestrator | skipping: [testbed-node-3]
2025-06-02 19:38:48.497484 | orchestrator | skipping: [testbed-node-4]
2025-06-02 19:38:48.525846 | orchestrator | skipping: [testbed-node-5]
2025-06-02 19:38:48.582632 | orchestrator | skipping: [testbed-node-0]
2025-06-02 19:38:48.583327 | orchestrator | skipping: [testbed-node-1]
2025-06-02 19:38:48.583956 | orchestrator | skipping: [testbed-node-2]
2025-06-02 19:38:48.585586 | orchestrator |
2025-06-02 19:38:48.586557 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] ***************
2025-06-02 19:38:48.587806 | orchestrator | Monday 02 June 2025 19:38:48 +0000 (0:00:00.204) 0:00:20.190 ***********
2025-06-02 19:38:49.564409 | orchestrator | ok: [testbed-manager]
2025-06-02 19:38:49.566642 | orchestrator | ok: [testbed-node-3]
2025-06-02 19:38:49.566787 | orchestrator | ok: [testbed-node-4]
2025-06-02 19:38:49.566801 | orchestrator | ok: [testbed-node-5]
2025-06-02 19:38:49.567688 | orchestrator | changed: [testbed-node-0]
2025-06-02 19:38:49.568773 | orchestrator | changed: [testbed-node-1]
2025-06-02 19:38:49.569136 | orchestrator | changed: [testbed-node-2]
2025-06-02 19:38:49.570344 | orchestrator |
2025-06-02 19:38:49.571088 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] *********************
2025-06-02 19:38:49.571795 | orchestrator | Monday 02 June 2025 19:38:49 +0000 (0:00:00.978) 0:00:21.169 ***********
2025-06-02 19:38:50.136397 | orchestrator | ok: [testbed-manager]
2025-06-02 19:38:50.139239 | orchestrator | ok: [testbed-node-3]
2025-06-02 19:38:50.140203 | orchestrator | ok: [testbed-node-5]
2025-06-02 19:38:50.140236 | orchestrator | ok: [testbed-node-4]
2025-06-02 19:38:50.140643 | orchestrator | ok: [testbed-node-0]
2025-06-02 19:38:50.141333 | orchestrator | ok: [testbed-node-1]
2025-06-02 19:38:50.141969 | orchestrator | ok: [testbed-node-2]
2025-06-02 19:38:50.142488 | orchestrator |
2025-06-02 19:38:50.143159 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] *********************
2025-06-02 19:38:50.143713 | orchestrator | Monday 02 June 2025 19:38:50 +0000 (0:00:00.572) 0:00:21.741 ***********
2025-06-02 19:38:51.159182 | orchestrator | ok: [testbed-manager]
2025-06-02 19:38:51.159764 | orchestrator | ok: [testbed-node-3]
2025-06-02 19:38:51.160118 | orchestrator | ok: [testbed-node-4]
2025-06-02 19:38:51.160527 | orchestrator | ok: [testbed-node-5]
2025-06-02 19:38:51.161025 | orchestrator | changed: [testbed-node-1]
2025-06-02 19:38:51.161359 | orchestrator | changed: [testbed-node-0]
2025-06-02 19:38:51.162084 | orchestrator | changed: [testbed-node-2]
2025-06-02 19:38:51.162235 | orchestrator |
2025-06-02 19:38:51.162642 | orchestrator | TASK [osism.commons.repository : Update package cache] *************************
2025-06-02 19:38:51.163004 | orchestrator | Monday 02 June 2025 19:38:51 +0000 (0:00:01.024) 0:00:22.765 ***********
2025-06-02 19:39:04.717700 | orchestrator | ok: [testbed-node-3]
2025-06-02 19:39:04.717826 | orchestrator | ok: [testbed-node-4]
2025-06-02 19:39:04.717842 | orchestrator | ok: [testbed-node-5]
2025-06-02 19:39:04.717854 | orchestrator | changed: [testbed-manager]
2025-06-02 19:39:04.717867 | orchestrator | changed: [testbed-node-2]
2025-06-02 19:39:04.718895 | orchestrator | changed: [testbed-node-1]
2025-06-02 19:39:04.719684 | orchestrator | changed: [testbed-node-0]
2025-06-02 19:39:04.720405 | orchestrator |
2025-06-02 19:39:04.721131 | orchestrator | TASK [osism.services.rsyslog : Gather variables for each operating system] *****
2025-06-02 19:39:04.721785 | orchestrator | Monday 02 June 2025 19:39:04 +0000 (0:00:13.549) 0:00:36.315 ***********
2025-06-02 19:39:04.782399 | orchestrator | ok: [testbed-manager]
2025-06-02 19:39:04.808357 | orchestrator | ok: [testbed-node-3]
2025-06-02 19:39:04.831008 | orchestrator | ok: [testbed-node-4]
2025-06-02 19:39:04.855174 | orchestrator | ok: [testbed-node-5]
2025-06-02 19:39:04.906814 | orchestrator | ok: [testbed-node-0]
2025-06-02 19:39:04.906955 | orchestrator | ok: [testbed-node-1]
2025-06-02 19:39:04.908762 | orchestrator | ok: [testbed-node-2]
2025-06-02 19:39:04.908861 | orchestrator |
2025-06-02 19:39:04.909457 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_user variable to default value] *****
2025-06-02 19:39:04.909484 | orchestrator | Monday 02 June 2025 19:39:04 +0000 (0:00:00.199) 0:00:36.514 ***********
2025-06-02 19:39:04.979157 | orchestrator | ok: [testbed-manager]
2025-06-02 19:39:05.008008 | orchestrator | ok: [testbed-node-3]
2025-06-02 19:39:05.040388 | orchestrator | ok: [testbed-node-4]
2025-06-02 19:39:05.062929 | orchestrator | ok: [testbed-node-5]
2025-06-02 19:39:05.133368 | orchestrator | ok: [testbed-node-0]
2025-06-02 19:39:05.133562 | orchestrator | ok: [testbed-node-1]
2025-06-02 19:39:05.135054 | orchestrator | ok: [testbed-node-2]
2025-06-02 19:39:05.135244 | orchestrator |
2025-06-02 19:39:05.136726 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_workdir variable to default value] ***
2025-06-02 19:39:05.137838 | orchestrator | Monday 02 June 2025 19:39:05 +0000 (0:00:00.225) 0:00:36.740 ***********
2025-06-02 19:39:05.208860 | orchestrator | ok: [testbed-manager]
2025-06-02 19:39:05.232236 | orchestrator | ok: [testbed-node-3]
2025-06-02 19:39:05.257467 | orchestrator | ok: [testbed-node-4]
2025-06-02 19:39:05.282901 | orchestrator | ok: [testbed-node-5]
2025-06-02 19:39:05.357821 | orchestrator | ok: [testbed-node-0]
2025-06-02 19:39:05.357918 | orchestrator | ok: [testbed-node-1]
2025-06-02 19:39:05.357932 | orchestrator | ok: [testbed-node-2]
2025-06-02 19:39:05.357945 | orchestrator |
2025-06-02 19:39:05.358093 | orchestrator | TASK [osism.services.rsyslog : Include distribution specific install tasks] ****
2025-06-02 19:39:05.358113 | orchestrator | Monday 02 June 2025 19:39:05 +0000 (0:00:00.222) 0:00:36.963 ***********
2025-06-02 19:39:05.628851 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-06-02 19:39:05.632021 | orchestrator |
2025-06-02 19:39:05.632052 | orchestrator | TASK [osism.services.rsyslog : Install rsyslog package] ************************
2025-06-02 19:39:05.632145 | orchestrator | Monday 02 June 2025 19:39:05 +0000 (0:00:00.267) 0:00:37.231 ***********
2025-06-02 19:39:07.085913 | orchestrator | ok: [testbed-manager]
2025-06-02 19:39:07.086119 | orchestrator | ok: [testbed-node-3]
2025-06-02 19:39:07.086848 | orchestrator | ok: [testbed-node-0]
2025-06-02 19:39:07.087922 | orchestrator | ok: [testbed-node-1]
2025-06-02 19:39:07.089836 | orchestrator | ok: [testbed-node-4]
2025-06-02 19:39:07.090762 | orchestrator | ok: [testbed-node-2]
2025-06-02 19:39:07.091560 | orchestrator | ok: [testbed-node-5]
2025-06-02 19:39:07.092382 | orchestrator |
2025-06-02 19:39:07.093295 | orchestrator | TASK [osism.services.rsyslog : Copy rsyslog.conf configuration file] ***********
2025-06-02 19:39:07.098007 | orchestrator | Monday 02 June 2025 19:39:07 +0000 (0:00:01.459) 0:00:38.690 ***********
2025-06-02 19:39:08.085234 | orchestrator | changed: [testbed-manager]
2025-06-02 19:39:08.086684 | orchestrator | changed: [testbed-node-3]
2025-06-02 19:39:08.087827 | orchestrator | changed: [testbed-node-4]
2025-06-02 19:39:08.089230 | orchestrator | changed: [testbed-node-0]
2025-06-02 19:39:08.090398 | orchestrator | changed: [testbed-node-1]
2025-06-02 19:39:08.091832 | orchestrator | changed: [testbed-node-5]
2025-06-02 19:39:08.092789 | orchestrator | changed: [testbed-node-2]
2025-06-02 19:39:08.093610 | orchestrator |
2025-06-02 19:39:08.094752 | orchestrator | TASK [osism.services.rsyslog : Manage rsyslog service] *************************
2025-06-02 19:39:08.095399 | orchestrator | Monday 02 June 2025 19:39:08 +0000 (0:00:01.001) 0:00:39.691 ***********
2025-06-02 19:39:08.857968 | orchestrator | ok: [testbed-manager]
2025-06-02 19:39:08.862129 | orchestrator | ok: [testbed-node-3]
2025-06-02 19:39:08.862161 | orchestrator | ok: [testbed-node-4]
2025-06-02 19:39:08.863351 | orchestrator | ok: [testbed-node-5]
2025-06-02 19:39:08.863980 | orchestrator | ok: [testbed-node-1]
2025-06-02 19:39:08.864552 | orchestrator | ok: [testbed-node-0]
2025-06-02 19:39:08.865050 | orchestrator | ok: [testbed-node-2]
2025-06-02 19:39:08.865939 | orchestrator |
2025-06-02 19:39:08.866568 | orchestrator | TASK [osism.services.rsyslog : Include fluentd tasks] **************************
2025-06-02 19:39:08.867173 | orchestrator | Monday 02 June 2025 19:39:08 +0000 (0:00:00.773) 0:00:40.465 ***********
2025-06-02 19:39:09.136744 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/fluentd.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-06-02 19:39:09.136865 | orchestrator |
2025-06-02 19:39:09.136909 | orchestrator | TASK [osism.services.rsyslog : Forward syslog message to local fluentd daemon] ***
2025-06-02 19:39:09.136923 | orchestrator | Monday 02 June 2025 19:39:09 +0000 (0:00:00.273) 0:00:40.739 ***********
2025-06-02 19:39:10.170364 | orchestrator | changed: [testbed-manager]
2025-06-02 19:39:10.170507 | orchestrator | changed: [testbed-node-3]
2025-06-02 19:39:10.171409 | orchestrator | changed: [testbed-node-4]
2025-06-02 19:39:10.172171 | orchestrator | changed: [testbed-node-0]
2025-06-02 19:39:10.173115 | orchestrator | changed: [testbed-node-5]
2025-06-02 19:39:10.175010 | orchestrator | changed: [testbed-node-1]
2025-06-02 19:39:10.175033 | orchestrator | changed: [testbed-node-2]
2025-06-02 19:39:10.175467 | orchestrator |
2025-06-02 19:39:10.175893 | orchestrator | TASK [osism.services.rsyslog : Include additional log server tasks] ************
2025-06-02 19:39:10.176346 | orchestrator | Monday 02 June 2025 19:39:10 +0000 (0:00:01.033) 0:00:41.772 ***********
2025-06-02 19:39:10.281959 | orchestrator | skipping: [testbed-manager]
2025-06-02 19:39:10.303309 | orchestrator | skipping: [testbed-node-3]
2025-06-02 19:39:10.327197 | orchestrator | skipping: [testbed-node-4]
2025-06-02 19:39:10.455593 | orchestrator | skipping: [testbed-node-5]
2025-06-02 19:39:10.456385 | orchestrator | skipping: [testbed-node-0]
2025-06-02 19:39:10.460201 | orchestrator | skipping: [testbed-node-1]
2025-06-02 19:39:10.460226 | orchestrator | skipping: [testbed-node-2]
2025-06-02 19:39:10.460238 | orchestrator |
2025-06-02 19:39:10.460898 | orchestrator | TASK [osism.commons.systohc : Install util-linux-extra package] ****************
2025-06-02 19:39:10.462125 | orchestrator | Monday 02 June 2025 19:39:10 +0000 (0:00:00.289) 0:00:42.062 ***********
2025-06-02 19:39:21.693174 | orchestrator | changed: [testbed-node-1]
2025-06-02 19:39:21.693279 | orchestrator | changed: [testbed-node-2]
2025-06-02
19:39:21.693292 | orchestrator | changed: [testbed-node-3] 2025-06-02 19:39:21.693357 | orchestrator | changed: [testbed-node-0] 2025-06-02 19:39:21.695217 | orchestrator | changed: [testbed-node-4] 2025-06-02 19:39:21.695273 | orchestrator | changed: [testbed-node-5] 2025-06-02 19:39:21.695287 | orchestrator | changed: [testbed-manager] 2025-06-02 19:39:21.695299 | orchestrator | 2025-06-02 19:39:21.695945 | orchestrator | TASK [osism.commons.systohc : Sync hardware clock] ***************************** 2025-06-02 19:39:21.696248 | orchestrator | Monday 02 June 2025 19:39:21 +0000 (0:00:11.235) 0:00:53.297 *********** 2025-06-02 19:39:22.662301 | orchestrator | ok: [testbed-manager] 2025-06-02 19:39:22.662385 | orchestrator | ok: [testbed-node-1] 2025-06-02 19:39:22.662622 | orchestrator | ok: [testbed-node-3] 2025-06-02 19:39:22.664317 | orchestrator | ok: [testbed-node-4] 2025-06-02 19:39:22.664800 | orchestrator | ok: [testbed-node-5] 2025-06-02 19:39:22.665873 | orchestrator | ok: [testbed-node-0] 2025-06-02 19:39:22.666340 | orchestrator | ok: [testbed-node-2] 2025-06-02 19:39:22.666771 | orchestrator | 2025-06-02 19:39:22.667488 | orchestrator | TASK [osism.commons.configfs : Start sys-kernel-config mount] ****************** 2025-06-02 19:39:22.668486 | orchestrator | Monday 02 June 2025 19:39:22 +0000 (0:00:00.969) 0:00:54.267 *********** 2025-06-02 19:39:23.552751 | orchestrator | ok: [testbed-manager] 2025-06-02 19:39:23.553983 | orchestrator | ok: [testbed-node-3] 2025-06-02 19:39:23.554108 | orchestrator | ok: [testbed-node-4] 2025-06-02 19:39:23.555927 | orchestrator | ok: [testbed-node-5] 2025-06-02 19:39:23.557843 | orchestrator | ok: [testbed-node-0] 2025-06-02 19:39:23.558913 | orchestrator | ok: [testbed-node-1] 2025-06-02 19:39:23.560019 | orchestrator | ok: [testbed-node-2] 2025-06-02 19:39:23.560586 | orchestrator | 2025-06-02 19:39:23.561022 | orchestrator | TASK [osism.commons.packages : Gather variables for each operating system] ***** 
2025-06-02 19:39:23.561694 | orchestrator | Monday 02 June 2025 19:39:23 +0000 (0:00:00.891) 0:00:55.158 ***********
2025-06-02 19:39:23.643690 | orchestrator | ok: [testbed-manager]
2025-06-02 19:39:23.674310 | orchestrator | ok: [testbed-node-3]
2025-06-02 19:39:23.702431 | orchestrator | ok: [testbed-node-4]
2025-06-02 19:39:23.728266 | orchestrator | ok: [testbed-node-5]
2025-06-02 19:39:23.779985 | orchestrator | ok: [testbed-node-0]
2025-06-02 19:39:23.780070 | orchestrator | ok: [testbed-node-1]
2025-06-02 19:39:23.780925 | orchestrator | ok: [testbed-node-2]
2025-06-02 19:39:23.781292 | orchestrator |
2025-06-02 19:39:23.781922 | orchestrator | TASK [osism.commons.packages : Set required_packages_distribution variable to default value] ***
2025-06-02 19:39:23.782288 | orchestrator | Monday 02 June 2025 19:39:23 +0000 (0:00:00.228) 0:00:55.386 ***********
2025-06-02 19:39:23.851605 | orchestrator | ok: [testbed-manager]
2025-06-02 19:39:23.878350 | orchestrator | ok: [testbed-node-3]
2025-06-02 19:39:23.903364 | orchestrator | ok: [testbed-node-4]
2025-06-02 19:39:23.931118 | orchestrator | ok: [testbed-node-5]
2025-06-02 19:39:23.980517 | orchestrator | ok: [testbed-node-0]
2025-06-02 19:39:23.981326 | orchestrator | ok: [testbed-node-1]
2025-06-02 19:39:23.983763 | orchestrator | ok: [testbed-node-2]
2025-06-02 19:39:23.983900 | orchestrator |
2025-06-02 19:39:23.985546 | orchestrator | TASK [osism.commons.packages : Include distribution specific package tasks] ****
2025-06-02 19:39:23.986204 | orchestrator | Monday 02 June 2025 19:39:23 +0000 (0:00:00.201) 0:00:55.588 ***********
2025-06-02 19:39:24.251036 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/packages/tasks/package-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-06-02 19:39:24.252266 | orchestrator |
2025-06-02 19:39:24.253210 | orchestrator | TASK [osism.commons.packages : Install needrestart package] ********************
2025-06-02 19:39:24.254209 | orchestrator | Monday 02 June 2025 19:39:24 +0000 (0:00:00.268) 0:00:55.857 ***********
2025-06-02 19:39:25.884091 | orchestrator | ok: [testbed-manager]
2025-06-02 19:39:25.885920 | orchestrator | ok: [testbed-node-3]
2025-06-02 19:39:25.885986 | orchestrator | ok: [testbed-node-4]
2025-06-02 19:39:25.886871 | orchestrator | ok: [testbed-node-1]
2025-06-02 19:39:25.889155 | orchestrator | ok: [testbed-node-2]
2025-06-02 19:39:25.890057 | orchestrator | ok: [testbed-node-0]
2025-06-02 19:39:25.890449 | orchestrator | ok: [testbed-node-5]
2025-06-02 19:39:25.891693 | orchestrator |
2025-06-02 19:39:25.892719 | orchestrator | TASK [osism.commons.packages : Set needrestart mode] ***************************
2025-06-02 19:39:25.892834 | orchestrator | Monday 02 June 2025 19:39:25 +0000 (0:00:01.630) 0:00:57.487 ***********
2025-06-02 19:39:26.471325 | orchestrator | changed: [testbed-manager]
2025-06-02 19:39:26.471526 | orchestrator | changed: [testbed-node-4]
2025-06-02 19:39:26.472301 | orchestrator | changed: [testbed-node-1]
2025-06-02 19:39:26.472982 | orchestrator | changed: [testbed-node-5]
2025-06-02 19:39:26.474103 | orchestrator | changed: [testbed-node-0]
2025-06-02 19:39:26.475549 | orchestrator | changed: [testbed-node-2]
2025-06-02 19:39:26.476376 | orchestrator | changed: [testbed-node-3]
2025-06-02 19:39:26.477178 | orchestrator |
2025-06-02 19:39:26.477601 | orchestrator | TASK [osism.commons.packages : Set apt_cache_valid_time variable to default value] ***
2025-06-02 19:39:26.477982 | orchestrator | Monday 02 June 2025 19:39:26 +0000 (0:00:00.591) 0:00:58.078 ***********
2025-06-02 19:39:26.547462 | orchestrator | ok: [testbed-manager]
2025-06-02 19:39:26.573371 | orchestrator | ok: [testbed-node-3]
2025-06-02 19:39:26.599970 | orchestrator | ok: [testbed-node-4]
2025-06-02 19:39:26.628116 | orchestrator | ok: [testbed-node-5]
2025-06-02 19:39:26.703564 | orchestrator | ok: [testbed-node-0]
2025-06-02 19:39:26.704328 | orchestrator | ok: [testbed-node-1]
2025-06-02 19:39:26.704538 | orchestrator | ok: [testbed-node-2]
2025-06-02 19:39:26.705865 | orchestrator |
2025-06-02 19:39:26.706541 | orchestrator | TASK [osism.commons.packages : Update package cache] ***************************
2025-06-02 19:39:26.707384 | orchestrator | Monday 02 June 2025 19:39:26 +0000 (0:00:00.232) 0:00:58.310 ***********
2025-06-02 19:39:27.886313 | orchestrator | ok: [testbed-manager]
2025-06-02 19:39:27.887275 | orchestrator | ok: [testbed-node-3]
2025-06-02 19:39:27.887785 | orchestrator | ok: [testbed-node-0]
2025-06-02 19:39:27.889535 | orchestrator | ok: [testbed-node-4]
2025-06-02 19:39:27.890709 | orchestrator | ok: [testbed-node-2]
2025-06-02 19:39:27.892185 | orchestrator | ok: [testbed-node-1]
2025-06-02 19:39:27.892998 | orchestrator | ok: [testbed-node-5]
2025-06-02 19:39:27.894527 | orchestrator |
2025-06-02 19:39:27.895435 | orchestrator | TASK [osism.commons.packages : Download upgrade packages] **********************
2025-06-02 19:39:27.896906 | orchestrator | Monday 02 June 2025 19:39:27 +0000 (0:00:01.180) 0:00:59.491 ***********
2025-06-02 19:39:29.591129 | orchestrator | changed: [testbed-manager]
2025-06-02 19:39:29.591345 | orchestrator | changed: [testbed-node-3]
2025-06-02 19:39:29.592898 | orchestrator | changed: [testbed-node-0]
2025-06-02 19:39:29.594916 | orchestrator | changed: [testbed-node-1]
2025-06-02 19:39:29.595902 | orchestrator | changed: [testbed-node-4]
2025-06-02 19:39:29.596725 | orchestrator | changed: [testbed-node-2]
2025-06-02 19:39:29.597680 | orchestrator | changed: [testbed-node-5]
2025-06-02 19:39:29.598371 | orchestrator |
2025-06-02 19:39:29.598845 | orchestrator | TASK [osism.commons.packages : Upgrade packages] *******************************
2025-06-02 19:39:29.599675 | orchestrator | Monday 02 June 2025 19:39:29 +0000 (0:00:02.093) 0:01:01.196 ***********
2025-06-02 19:39:31.684964 | orchestrator | ok: [testbed-manager]
2025-06-02 19:39:31.685086 | orchestrator | ok: [testbed-node-3]
2025-06-02 19:39:31.685218 | orchestrator | ok: [testbed-node-2]
2025-06-02 19:39:31.687743 | orchestrator | ok: [testbed-node-0]
2025-06-02 19:39:31.689157 | orchestrator | ok: [testbed-node-1]
2025-06-02 19:39:31.690135 | orchestrator | ok: [testbed-node-4]
2025-06-02 19:39:31.690869 | orchestrator | ok: [testbed-node-5]
2025-06-02 19:39:31.691483 | orchestrator |
2025-06-02 19:39:31.692212 | orchestrator | TASK [osism.commons.packages : Download required packages] *********************
2025-06-02 19:39:31.692933 | orchestrator | Monday 02 June 2025 19:39:31 +0000 (0:00:02.093) 0:01:03.290 ***********
2025-06-02 19:40:07.854342 | orchestrator | ok: [testbed-manager]
2025-06-02 19:40:07.854454 | orchestrator | ok: [testbed-node-5]
2025-06-02 19:40:07.854470 | orchestrator | ok: [testbed-node-3]
2025-06-02 19:40:07.854551 | orchestrator | ok: [testbed-node-2]
2025-06-02 19:40:07.854566 | orchestrator | ok: [testbed-node-0]
2025-06-02 19:40:07.855483 | orchestrator | ok: [testbed-node-4]
2025-06-02 19:40:07.855796 | orchestrator | ok: [testbed-node-1]
2025-06-02 19:40:07.857313 | orchestrator |
2025-06-02 19:40:07.857939 | orchestrator | TASK [osism.commons.packages : Install required packages] **********************
2025-06-02 19:40:07.858484 | orchestrator | Monday 02 June 2025 19:40:07 +0000 (0:00:36.167) 0:01:39.457 ***********
2025-06-02 19:41:23.452086 | orchestrator | changed: [testbed-manager]
2025-06-02 19:41:23.452220 | orchestrator | changed: [testbed-node-4]
2025-06-02 19:41:23.454561 | orchestrator | changed: [testbed-node-1]
2025-06-02 19:41:23.456143 | orchestrator | changed: [testbed-node-2]
2025-06-02 19:41:23.457350 | orchestrator | changed: [testbed-node-0]
2025-06-02 19:41:23.458412 | orchestrator | changed: [testbed-node-3]
2025-06-02 19:41:23.459798 | orchestrator | changed: [testbed-node-5]
2025-06-02 19:41:23.460505 | orchestrator |
2025-06-02 19:41:23.461246 | orchestrator | TASK [osism.commons.packages : Remove useless packages from the cache] *********
2025-06-02 19:41:23.462095 | orchestrator | Monday 02 June 2025 19:41:23 +0000 (0:01:15.586) 0:02:55.044 ***********
2025-06-02 19:41:25.044697 | orchestrator | ok: [testbed-manager]
2025-06-02 19:41:25.045771 | orchestrator | ok: [testbed-node-4]
2025-06-02 19:41:25.046970 | orchestrator | ok: [testbed-node-5]
2025-06-02 19:41:25.048463 | orchestrator | ok: [testbed-node-2]
2025-06-02 19:41:25.051618 | orchestrator | ok: [testbed-node-1]
2025-06-02 19:41:25.051778 | orchestrator | ok: [testbed-node-0]
2025-06-02 19:41:25.053331 | orchestrator | ok: [testbed-node-3]
2025-06-02 19:41:25.054263 | orchestrator |
2025-06-02 19:41:25.055461 | orchestrator | TASK [osism.commons.packages : Remove dependencies that are no longer required] ***
2025-06-02 19:41:25.055853 | orchestrator | Monday 02 June 2025 19:41:25 +0000 (0:00:01.605) 0:02:56.650 ***********
2025-06-02 19:41:37.460877 | orchestrator | ok: [testbed-node-3]
2025-06-02 19:41:37.461001 | orchestrator | ok: [testbed-node-4]
2025-06-02 19:41:37.461018 | orchestrator | ok: [testbed-node-0]
2025-06-02 19:41:37.462187 | orchestrator | ok: [testbed-node-1]
2025-06-02 19:41:37.466313 | orchestrator | ok: [testbed-node-5]
2025-06-02 19:41:37.467015 | orchestrator | ok: [testbed-node-2]
2025-06-02 19:41:37.468774 | orchestrator | changed: [testbed-manager]
2025-06-02 19:41:37.469783 | orchestrator |
2025-06-02 19:41:37.470790 | orchestrator | TASK [osism.commons.sysctl : Include sysctl tasks] *****************************
2025-06-02 19:41:37.472083 | orchestrator | Monday 02 June 2025 19:41:37 +0000 (0:00:12.411) 0:03:09.061 ***********
2025-06-02 19:41:37.892572 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'elasticsearch', 'value': [{'name': 'vm.max_map_count', 'value': 262144}]})
2025-06-02 19:41:37.894323 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'rabbitmq', 'value': [{'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}, {'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}, {'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}, {'name': 'net.core.wmem_max', 'value': 16777216}, {'name': 'net.core.rmem_max', 'value': 16777216}, {'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}, {'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}, {'name': 'net.core.somaxconn', 'value': 4096}, {'name': 'net.ipv4.tcp_syncookies', 'value': 0}, {'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}]})
2025-06-02 19:41:37.896183 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'generic', 'value': [{'name': 'vm.swappiness', 'value': 1}]})
2025-06-02 19:41:37.897508 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'compute', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]})
2025-06-02 19:41:37.898811 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'k3s_node', 'value': [{'name': 'fs.inotify.max_user_instances', 'value': 1024}]})
2025-06-02 19:41:37.901281 | orchestrator |
2025-06-02 19:41:37.901823 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on elasticsearch] ***********
2025-06-02 19:41:37.902562 | orchestrator | Monday 02 June 2025 19:41:37 +0000 (0:00:00.437) 0:03:09.499 ***********
2025-06-02 19:41:37.955125 | orchestrator | skipping: [testbed-manager] => (item={'name': 'vm.max_map_count', 'value': 262144})
2025-06-02 19:41:37.991534 | orchestrator | skipping: [testbed-manager]
2025-06-02 19:41:37.992886 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'vm.max_map_count', 'value': 262144})
2025-06-02 19:41:38.036287 | orchestrator | skipping: [testbed-node-3]
2025-06-02 19:41:38.036572 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'vm.max_map_count', 'value': 262144})
2025-06-02 19:41:38.037597 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'vm.max_map_count', 'value': 262144})
2025-06-02 19:41:38.066139 | orchestrator | skipping: [testbed-node-4]
2025-06-02 19:41:38.095749 | orchestrator | skipping: [testbed-node-5]
2025-06-02 19:41:38.591182 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144})
2025-06-02 19:41:38.591568 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144})
2025-06-02 19:41:38.593781 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144})
2025-06-02 19:41:38.593826 | orchestrator |
2025-06-02 19:41:38.596744 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on rabbitmq] ****************
2025-06-02 19:41:38.598243 | orchestrator | Monday 02 June 2025 19:41:38 +0000 (0:00:00.697) 0:03:10.196 ***********
2025-06-02 19:41:38.666894 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2025-06-02 19:41:38.667718 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2025-06-02 19:41:38.668437 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2025-06-02 19:41:38.668931 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2025-06-02 19:41:38.669579 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2025-06-02 19:41:38.669827 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2025-06-02 19:41:38.670117 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2025-06-02 19:41:38.673021 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2025-06-02 19:41:38.673071 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2025-06-02 19:41:38.673083 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2025-06-02 19:41:38.673094 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2025-06-02 19:41:38.673105 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2025-06-02 19:41:38.710312 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2025-06-02 19:41:38.712267 | orchestrator | skipping: [testbed-manager]
2025-06-02 19:41:38.713921 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2025-06-02 19:41:38.714728 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2025-06-02 19:41:38.715383 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2025-06-02 19:41:38.716144 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2025-06-02 19:41:38.716969 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2025-06-02 19:41:38.717503 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2025-06-02 19:41:38.718131 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2025-06-02 19:41:38.720370 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2025-06-02 19:41:38.720395 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2025-06-02 19:41:38.720406 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2025-06-02 19:41:38.720417 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2025-06-02 19:41:38.747207 | orchestrator | skipping: [testbed-node-3]
2025-06-02 19:41:38.747302 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2025-06-02 19:41:38.747586 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2025-06-02 19:41:38.748155 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2025-06-02 19:41:38.748824 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2025-06-02 19:41:38.749264 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2025-06-02 19:41:38.749776 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2025-06-02 19:41:38.750399 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2025-06-02 19:41:38.781735 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2025-06-02 19:41:38.781866 | orchestrator | skipping: [testbed-node-4]
2025-06-02 19:41:38.782464 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2025-06-02 19:41:38.783021 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2025-06-02 19:41:38.783387 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2025-06-02 19:41:38.783904 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2025-06-02 19:41:38.784487 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2025-06-02 19:41:38.784758 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2025-06-02 19:41:38.785528 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2025-06-02 19:41:38.785561 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2025-06-02 19:41:38.807376 | orchestrator | skipping: [testbed-node-5]
2025-06-02 19:41:43.436126 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2025-06-02 19:41:43.438654 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2025-06-02 19:41:43.441883 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2025-06-02 19:41:43.444439 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2025-06-02 19:41:43.445726 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2025-06-02 19:41:43.447280 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2025-06-02 19:41:43.448150 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2025-06-02 19:41:43.449250 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2025-06-02 19:41:43.450224 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2025-06-02 19:41:43.451041 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2025-06-02 19:41:43.451687 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2025-06-02 19:41:43.452362 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2025-06-02 19:41:43.453161 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2025-06-02 19:41:43.453764 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2025-06-02 19:41:43.454498 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2025-06-02 19:41:43.455255 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2025-06-02 19:41:43.455880 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2025-06-02 19:41:43.456111 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2025-06-02 19:41:43.456708 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2025-06-02 19:41:43.457350 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2025-06-02 19:41:43.457658 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2025-06-02 19:41:43.458282 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2025-06-02 19:41:43.459070 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2025-06-02 19:41:43.459916 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2025-06-02 19:41:43.459959 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2025-06-02 19:41:43.460221 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2025-06-02 19:41:43.460673 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2025-06-02 19:41:43.461163 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2025-06-02 19:41:43.461838 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2025-06-02 19:41:43.462005 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2025-06-02 19:41:43.462481 | orchestrator |
2025-06-02 19:41:43.462868 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on generic] *****************
2025-06-02 19:41:43.463208 | orchestrator | Monday 02 June 2025 19:41:43 +0000 (0:00:04.841) 0:03:15.037 ***********
2025-06-02 19:41:45.037295 | orchestrator | changed: [testbed-manager] => (item={'name': 'vm.swappiness', 'value': 1})
2025-06-02 19:41:45.038088 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 1})
2025-06-02 19:41:45.039388 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 1})
2025-06-02 19:41:45.040716 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.swappiness', 'value': 1})
2025-06-02 19:41:45.041893 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.swappiness', 'value': 1})
2025-06-02 19:41:45.042538 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.swappiness', 'value': 1})
2025-06-02 19:41:45.043647 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 1})
2025-06-02 19:41:45.044384 | orchestrator |
2025-06-02 19:41:45.044885 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on compute] *****************
2025-06-02 19:41:45.045334 | orchestrator | Monday 02 June 2025 19:41:45 +0000 (0:00:01.604) 0:03:16.642 ***********
2025-06-02 19:41:45.091975 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2025-06-02 19:41:45.118213 | orchestrator | skipping: [testbed-manager]
2025-06-02 19:41:45.193264 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2025-06-02 19:41:45.567760 | orchestrator | skipping: [testbed-node-0]
2025-06-02 19:41:45.570385 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2025-06-02 19:41:45.570427 | orchestrator | skipping: [testbed-node-1]
2025-06-02 19:41:45.571742 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2025-06-02 19:41:45.574570 | orchestrator | skipping: [testbed-node-2]
2025-06-02 19:41:45.574652 | orchestrator | changed: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2025-06-02 19:41:45.575053 | orchestrator | changed: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2025-06-02 19:41:45.575493 | orchestrator | changed: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2025-06-02 19:41:45.576217 | orchestrator |
2025-06-02 19:41:45.577161 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on k3s_node] ****************
2025-06-02 19:41:45.577703 | orchestrator | Monday 02 June 2025 19:41:45 +0000 (0:00:00.531) 0:03:17.173 ***********
2025-06-02 19:41:45.626924 | orchestrator | skipping: [testbed-manager] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2025-06-02 19:41:45.660296 | orchestrator | skipping: [testbed-manager]
2025-06-02 19:41:45.740045 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2025-06-02 19:41:47.141390 | orchestrator | skipping: [testbed-node-0]
2025-06-02 19:41:47.141858 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2025-06-02 19:41:47.145216 | orchestrator | skipping: [testbed-node-1]
2025-06-02 19:41:47.145253 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2025-06-02 19:41:47.145345 | orchestrator | skipping: [testbed-node-2]
2025-06-02 19:41:47.145358 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2025-06-02 19:41:47.145410 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2025-06-02 19:41:47.145922 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2025-06-02 19:41:47.146897 | orchestrator |
2025-06-02 19:41:47.147077 | orchestrator | TASK [osism.commons.limits : Include limits tasks] *****************************
2025-06-02 19:41:47.147559 | orchestrator | Monday 02 June 2025 19:41:47 +0000 (0:00:01.574) 0:03:18.747 ***********
2025-06-02 19:41:47.217510 | orchestrator | skipping: [testbed-manager]
2025-06-02 19:41:47.242257 | orchestrator | skipping: [testbed-node-3]
2025-06-02 19:41:47.271901 | orchestrator | skipping: [testbed-node-4]
2025-06-02 19:41:47.296578 | orchestrator | skipping: [testbed-node-5]
2025-06-02 19:41:47.414366 | orchestrator | skipping: [testbed-node-0]
2025-06-02 19:41:47.414855 | orchestrator | skipping: [testbed-node-1]
2025-06-02 19:41:47.416962 | orchestrator | skipping: [testbed-node-2]
2025-06-02 19:41:47.416998 | orchestrator |
2025-06-02 19:41:47.418088 | orchestrator | TASK [osism.commons.services : Populate service facts] *************************
2025-06-02 19:41:47.418872 | orchestrator | Monday 02 June 2025 19:41:47 +0000 (0:00:00.272) 0:03:19.020 ***********
2025-06-02 19:41:53.188027 | orchestrator | ok: [testbed-node-4]
2025-06-02 19:41:53.188330 | orchestrator | ok: [testbed-node-1]
2025-06-02 19:41:53.189770 | orchestrator | ok: [testbed-node-2]
2025-06-02 19:41:53.190442 | orchestrator | ok: [testbed-manager]
2025-06-02 19:41:53.190987 | orchestrator | ok: [testbed-node-0]
2025-06-02 19:41:53.192487 | orchestrator | ok: [testbed-node-5]
2025-06-02 19:41:53.193206 | orchestrator | ok: [testbed-node-3]
2025-06-02 19:41:53.193927 | orchestrator |
2025-06-02 19:41:53.194466 | orchestrator | TASK [osism.commons.services : Check services] *********************************
2025-06-02 19:41:53.195378 | orchestrator | Monday 02 June 2025 19:41:53 +0000 (0:00:05.773) 0:03:24.794 ***********
2025-06-02 19:41:53.230276 | orchestrator | skipping: [testbed-manager] => (item=nscd)
2025-06-02 19:41:53.266959 | orchestrator | skipping: [testbed-manager]
2025-06-02 19:41:53.308181 | orchestrator | skipping: [testbed-node-3] => (item=nscd)
2025-06-02 19:41:53.351369 | orchestrator | skipping: [testbed-node-4] => (item=nscd)
2025-06-02 19:41:53.351517 | orchestrator | skipping: [testbed-node-3]
2025-06-02 19:41:53.352342 | orchestrator | skipping: [testbed-node-5] => (item=nscd)
2025-06-02 19:41:53.387700 | orchestrator | skipping: [testbed-node-4]
2025-06-02 19:41:53.387839 | orchestrator | skipping: [testbed-node-0] => (item=nscd)
2025-06-02 19:41:53.421153 | orchestrator | skipping: [testbed-node-5] 2025-06-02 19:41:53.494514 | orchestrator | skipping: [testbed-node-0] 2025-06-02 19:41:53.495297 | orchestrator | skipping: [testbed-node-1] => (item=nscd)  2025-06-02 19:41:53.496194 | orchestrator | skipping: [testbed-node-1] 2025-06-02 19:41:53.497327 | orchestrator | skipping: [testbed-node-2] => (item=nscd)  2025-06-02 19:41:53.498470 | orchestrator | skipping: [testbed-node-2] 2025-06-02 19:41:53.499422 | orchestrator | 2025-06-02 19:41:53.500101 | orchestrator | TASK [osism.commons.services : Start/enable required services] ***************** 2025-06-02 19:41:53.500579 | orchestrator | Monday 02 June 2025 19:41:53 +0000 (0:00:00.307) 0:03:25.101 *********** 2025-06-02 19:41:54.590767 | orchestrator | ok: [testbed-manager] => (item=cron) 2025-06-02 19:41:54.591940 | orchestrator | ok: [testbed-node-4] => (item=cron) 2025-06-02 19:41:54.592497 | orchestrator | ok: [testbed-node-3] => (item=cron) 2025-06-02 19:41:54.593532 | orchestrator | ok: [testbed-node-5] => (item=cron) 2025-06-02 19:41:54.594627 | orchestrator | ok: [testbed-node-0] => (item=cron) 2025-06-02 19:41:54.595538 | orchestrator | ok: [testbed-node-1] => (item=cron) 2025-06-02 19:41:54.596690 | orchestrator | ok: [testbed-node-2] => (item=cron) 2025-06-02 19:41:54.597146 | orchestrator | 2025-06-02 19:41:54.598258 | orchestrator | TASK [osism.commons.motd : Include distribution specific configure tasks] ****** 2025-06-02 19:41:54.598971 | orchestrator | Monday 02 June 2025 19:41:54 +0000 (0:00:01.094) 0:03:26.195 *********** 2025-06-02 19:41:55.233076 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/motd/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 19:41:55.233614 | orchestrator | 2025-06-02 19:41:55.234541 | orchestrator | TASK [osism.commons.motd : Remove 
update-motd package] ************************* 2025-06-02 19:41:55.235538 | orchestrator | Monday 02 June 2025 19:41:55 +0000 (0:00:00.640) 0:03:26.835 *********** 2025-06-02 19:41:56.403094 | orchestrator | ok: [testbed-manager] 2025-06-02 19:41:56.406233 | orchestrator | ok: [testbed-node-4] 2025-06-02 19:41:56.406948 | orchestrator | ok: [testbed-node-3] 2025-06-02 19:41:56.408841 | orchestrator | ok: [testbed-node-1] 2025-06-02 19:41:56.409331 | orchestrator | ok: [testbed-node-2] 2025-06-02 19:41:56.410263 | orchestrator | ok: [testbed-node-5] 2025-06-02 19:41:56.410932 | orchestrator | ok: [testbed-node-0] 2025-06-02 19:41:56.411503 | orchestrator | 2025-06-02 19:41:56.412027 | orchestrator | TASK [osism.commons.motd : Check if /etc/default/motd-news exists] ************* 2025-06-02 19:41:56.412486 | orchestrator | Monday 02 June 2025 19:41:56 +0000 (0:00:01.173) 0:03:28.008 *********** 2025-06-02 19:41:57.013798 | orchestrator | ok: [testbed-manager] 2025-06-02 19:41:57.014010 | orchestrator | ok: [testbed-node-3] 2025-06-02 19:41:57.016064 | orchestrator | ok: [testbed-node-4] 2025-06-02 19:41:57.018352 | orchestrator | ok: [testbed-node-5] 2025-06-02 19:41:57.019058 | orchestrator | ok: [testbed-node-0] 2025-06-02 19:41:57.020728 | orchestrator | ok: [testbed-node-1] 2025-06-02 19:41:57.021755 | orchestrator | ok: [testbed-node-2] 2025-06-02 19:41:57.023151 | orchestrator | 2025-06-02 19:41:57.023944 | orchestrator | TASK [osism.commons.motd : Disable the dynamic motd-news service] ************** 2025-06-02 19:41:57.025109 | orchestrator | Monday 02 June 2025 19:41:57 +0000 (0:00:00.608) 0:03:28.617 *********** 2025-06-02 19:41:57.645322 | orchestrator | changed: [testbed-manager] 2025-06-02 19:41:57.645425 | orchestrator | changed: [testbed-node-3] 2025-06-02 19:41:57.649842 | orchestrator | changed: [testbed-node-4] 2025-06-02 19:41:57.651059 | orchestrator | changed: [testbed-node-0] 2025-06-02 19:41:57.652035 | orchestrator | changed: [testbed-node-5] 
2025-06-02 19:41:57.652994 | orchestrator | changed: [testbed-node-1] 2025-06-02 19:41:57.653973 | orchestrator | changed: [testbed-node-2] 2025-06-02 19:41:57.655091 | orchestrator | 2025-06-02 19:41:57.656957 | orchestrator | TASK [osism.commons.motd : Get all configuration files in /etc/pam.d] ********** 2025-06-02 19:41:57.657299 | orchestrator | Monday 02 June 2025 19:41:57 +0000 (0:00:00.632) 0:03:29.250 *********** 2025-06-02 19:41:58.256880 | orchestrator | ok: [testbed-manager] 2025-06-02 19:41:58.260839 | orchestrator | ok: [testbed-node-0] 2025-06-02 19:41:58.260924 | orchestrator | ok: [testbed-node-3] 2025-06-02 19:41:58.262066 | orchestrator | ok: [testbed-node-4] 2025-06-02 19:41:58.262405 | orchestrator | ok: [testbed-node-1] 2025-06-02 19:41:58.263166 | orchestrator | ok: [testbed-node-2] 2025-06-02 19:41:58.264144 | orchestrator | ok: [testbed-node-5] 2025-06-02 19:41:58.264827 | orchestrator | 2025-06-02 19:41:58.265453 | orchestrator | TASK [osism.commons.motd : Remove pam_motd.so rule] **************************** 2025-06-02 19:41:58.266355 | orchestrator | Monday 02 June 2025 19:41:58 +0000 (0:00:00.612) 0:03:29.862 *********** 2025-06-02 19:41:59.201966 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1748891969.7420592, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-02 19:41:59.202142 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 
'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1748892039.394721, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-02 19:41:59.203283 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1748892041.048422, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-02 19:41:59.204496 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1748892019.7383676, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-02 19:41:59.205592 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1748892040.9295623, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 
'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-02 19:41:59.207129 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1748892043.7051685, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-02 19:41:59.208153 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1748892028.6013694, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-02 19:41:59.208991 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1748891996.8514569, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-02 19:41:59.209509 | orchestrator | changed: [testbed-node-4] => 
(item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1748891930.2900095, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-02 19:41:59.209790 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1748891936.7579708, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-02 19:41:59.210223 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1748891938.9003906, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-02 19:41:59.210912 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 
2049, 'nlink': 1, 'atime': 1748891922.7132914, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-02 19:41:59.211400 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1748891926.761522, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-02 19:41:59.211697 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1748891936.7536547, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-02 19:41:59.212012 | orchestrator | 2025-06-02 19:41:59.212503 | orchestrator | TASK [osism.commons.motd : Copy motd file] ************************************* 2025-06-02 19:41:59.213315 | orchestrator | Monday 02 June 2025 19:41:59 +0000 (0:00:00.945) 0:03:30.807 *********** 2025-06-02 19:42:00.395371 | orchestrator | changed: [testbed-manager] 2025-06-02 19:42:00.395474 | orchestrator | changed: [testbed-node-3] 2025-06-02 19:42:00.396310 | orchestrator | changed: [testbed-node-4] 2025-06-02 
19:42:00.397574 | orchestrator | changed: [testbed-node-1] 2025-06-02 19:42:00.398536 | orchestrator | changed: [testbed-node-5] 2025-06-02 19:42:00.399277 | orchestrator | changed: [testbed-node-2] 2025-06-02 19:42:00.400173 | orchestrator | changed: [testbed-node-0] 2025-06-02 19:42:00.400930 | orchestrator | 2025-06-02 19:42:00.401529 | orchestrator | TASK [osism.commons.motd : Copy issue file] ************************************ 2025-06-02 19:42:00.402298 | orchestrator | Monday 02 June 2025 19:42:00 +0000 (0:00:01.191) 0:03:31.999 *********** 2025-06-02 19:42:01.537317 | orchestrator | changed: [testbed-manager] 2025-06-02 19:42:01.538954 | orchestrator | changed: [testbed-node-4] 2025-06-02 19:42:01.541989 | orchestrator | changed: [testbed-node-3] 2025-06-02 19:42:01.542053 | orchestrator | changed: [testbed-node-5] 2025-06-02 19:42:01.542067 | orchestrator | changed: [testbed-node-0] 2025-06-02 19:42:01.542761 | orchestrator | changed: [testbed-node-1] 2025-06-02 19:42:01.543912 | orchestrator | changed: [testbed-node-2] 2025-06-02 19:42:01.545035 | orchestrator | 2025-06-02 19:42:01.545748 | orchestrator | TASK [osism.commons.motd : Copy issue.net file] ******************************** 2025-06-02 19:42:01.546683 | orchestrator | Monday 02 June 2025 19:42:01 +0000 (0:00:01.141) 0:03:33.141 *********** 2025-06-02 19:42:02.741566 | orchestrator | changed: [testbed-manager] 2025-06-02 19:42:02.742491 | orchestrator | changed: [testbed-node-4] 2025-06-02 19:42:02.745566 | orchestrator | changed: [testbed-node-1] 2025-06-02 19:42:02.745636 | orchestrator | changed: [testbed-node-0] 2025-06-02 19:42:02.748924 | orchestrator | changed: [testbed-node-3] 2025-06-02 19:42:02.750298 | orchestrator | changed: [testbed-node-5] 2025-06-02 19:42:02.751445 | orchestrator | changed: [testbed-node-2] 2025-06-02 19:42:02.752655 | orchestrator | 2025-06-02 19:42:02.753985 | orchestrator | TASK [osism.commons.motd : Configure SSH to print the motd] ******************** 
2025-06-02 19:42:02.754781 | orchestrator | Monday 02 June 2025 19:42:02 +0000 (0:00:01.205) 0:03:34.346 *********** 2025-06-02 19:42:02.819324 | orchestrator | skipping: [testbed-manager] 2025-06-02 19:42:02.862339 | orchestrator | skipping: [testbed-node-3] 2025-06-02 19:42:02.918932 | orchestrator | skipping: [testbed-node-4] 2025-06-02 19:42:02.957741 | orchestrator | skipping: [testbed-node-5] 2025-06-02 19:42:02.993239 | orchestrator | skipping: [testbed-node-0] 2025-06-02 19:42:03.065940 | orchestrator | skipping: [testbed-node-1] 2025-06-02 19:42:03.066723 | orchestrator | skipping: [testbed-node-2] 2025-06-02 19:42:03.067517 | orchestrator | 2025-06-02 19:42:03.068278 | orchestrator | TASK [osism.commons.motd : Configure SSH to not print the motd] **************** 2025-06-02 19:42:03.069207 | orchestrator | Monday 02 June 2025 19:42:03 +0000 (0:00:00.325) 0:03:34.671 *********** 2025-06-02 19:42:03.916135 | orchestrator | ok: [testbed-manager] 2025-06-02 19:42:03.916916 | orchestrator | ok: [testbed-node-3] 2025-06-02 19:42:03.918407 | orchestrator | ok: [testbed-node-4] 2025-06-02 19:42:03.919460 | orchestrator | ok: [testbed-node-5] 2025-06-02 19:42:03.920887 | orchestrator | ok: [testbed-node-0] 2025-06-02 19:42:03.921231 | orchestrator | ok: [testbed-node-1] 2025-06-02 19:42:03.922002 | orchestrator | ok: [testbed-node-2] 2025-06-02 19:42:03.923019 | orchestrator | 2025-06-02 19:42:03.923758 | orchestrator | TASK [osism.services.rng : Include distribution specific install tasks] ******** 2025-06-02 19:42:03.924660 | orchestrator | Monday 02 June 2025 19:42:03 +0000 (0:00:00.851) 0:03:35.522 *********** 2025-06-02 19:42:04.352427 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rng/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 19:42:04.352531 | orchestrator | 2025-06-02 19:42:04.354177 | 
orchestrator | TASK [osism.services.rng : Install rng package] ******************************** 2025-06-02 19:42:04.354659 | orchestrator | Monday 02 June 2025 19:42:04 +0000 (0:00:00.435) 0:03:35.958 *********** 2025-06-02 19:42:12.146694 | orchestrator | ok: [testbed-manager] 2025-06-02 19:42:12.146821 | orchestrator | changed: [testbed-node-2] 2025-06-02 19:42:12.146837 | orchestrator | changed: [testbed-node-4] 2025-06-02 19:42:12.146912 | orchestrator | changed: [testbed-node-1] 2025-06-02 19:42:12.146998 | orchestrator | changed: [testbed-node-0] 2025-06-02 19:42:12.147701 | orchestrator | changed: [testbed-node-5] 2025-06-02 19:42:12.147899 | orchestrator | changed: [testbed-node-3] 2025-06-02 19:42:12.149556 | orchestrator | 2025-06-02 19:42:12.149641 | orchestrator | TASK [osism.services.rng : Remove haveged package] ***************************** 2025-06-02 19:42:12.149666 | orchestrator | Monday 02 June 2025 19:42:12 +0000 (0:00:07.794) 0:03:43.752 *********** 2025-06-02 19:42:13.309142 | orchestrator | ok: [testbed-manager] 2025-06-02 19:42:13.309948 | orchestrator | ok: [testbed-node-3] 2025-06-02 19:42:13.310719 | orchestrator | ok: [testbed-node-4] 2025-06-02 19:42:13.312098 | orchestrator | ok: [testbed-node-1] 2025-06-02 19:42:13.312886 | orchestrator | ok: [testbed-node-5] 2025-06-02 19:42:13.313858 | orchestrator | ok: [testbed-node-0] 2025-06-02 19:42:13.314438 | orchestrator | ok: [testbed-node-2] 2025-06-02 19:42:13.315141 | orchestrator | 2025-06-02 19:42:13.315960 | orchestrator | TASK [osism.services.rng : Manage rng service] ********************************* 2025-06-02 19:42:13.316586 | orchestrator | Monday 02 June 2025 19:42:13 +0000 (0:00:01.162) 0:03:44.915 *********** 2025-06-02 19:42:14.384316 | orchestrator | ok: [testbed-manager] 2025-06-02 19:42:14.384548 | orchestrator | ok: [testbed-node-4] 2025-06-02 19:42:14.384670 | orchestrator | ok: [testbed-node-3] 2025-06-02 19:42:14.387697 | orchestrator | ok: [testbed-node-5] 2025-06-02 
19:42:14.388227 | orchestrator | ok: [testbed-node-1] 2025-06-02 19:42:14.389032 | orchestrator | ok: [testbed-node-0] 2025-06-02 19:42:14.389751 | orchestrator | ok: [testbed-node-2] 2025-06-02 19:42:14.390245 | orchestrator | 2025-06-02 19:42:14.391037 | orchestrator | TASK [osism.services.smartd : Include distribution specific install tasks] ***** 2025-06-02 19:42:14.391825 | orchestrator | Monday 02 June 2025 19:42:14 +0000 (0:00:01.073) 0:03:45.988 *********** 2025-06-02 19:42:14.890621 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/smartd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 19:42:14.891038 | orchestrator | 2025-06-02 19:42:14.892495 | orchestrator | TASK [osism.services.smartd : Install smartmontools package] ******************* 2025-06-02 19:42:14.895341 | orchestrator | Monday 02 June 2025 19:42:14 +0000 (0:00:00.508) 0:03:46.497 *********** 2025-06-02 19:42:23.288654 | orchestrator | changed: [testbed-node-1] 2025-06-02 19:42:23.289972 | orchestrator | changed: [testbed-node-2] 2025-06-02 19:42:23.290316 | orchestrator | changed: [testbed-node-4] 2025-06-02 19:42:23.291257 | orchestrator | changed: [testbed-node-0] 2025-06-02 19:42:23.292847 | orchestrator | changed: [testbed-node-3] 2025-06-02 19:42:23.294895 | orchestrator | changed: [testbed-manager] 2025-06-02 19:42:23.296948 | orchestrator | changed: [testbed-node-5] 2025-06-02 19:42:23.298126 | orchestrator | 2025-06-02 19:42:23.298958 | orchestrator | TASK [osism.services.smartd : Create /var/log/smartd directory] **************** 2025-06-02 19:42:23.299826 | orchestrator | Monday 02 June 2025 19:42:23 +0000 (0:00:08.396) 0:03:54.894 *********** 2025-06-02 19:42:23.921679 | orchestrator | changed: [testbed-manager] 2025-06-02 19:42:23.922845 | orchestrator | changed: [testbed-node-3] 2025-06-02 19:42:23.924625 | orchestrator | 
changed: [testbed-node-4] 2025-06-02 19:42:23.925808 | orchestrator | changed: [testbed-node-5] 2025-06-02 19:42:23.927085 | orchestrator | changed: [testbed-node-0] 2025-06-02 19:42:23.927821 | orchestrator | changed: [testbed-node-1] 2025-06-02 19:42:23.929340 | orchestrator | changed: [testbed-node-2] 2025-06-02 19:42:23.931087 | orchestrator | 2025-06-02 19:42:23.932324 | orchestrator | TASK [osism.services.smartd : Copy smartmontools configuration file] *********** 2025-06-02 19:42:23.932766 | orchestrator | Monday 02 June 2025 19:42:23 +0000 (0:00:00.633) 0:03:55.528 *********** 2025-06-02 19:42:25.095534 | orchestrator | changed: [testbed-manager] 2025-06-02 19:42:25.095681 | orchestrator | changed: [testbed-node-3] 2025-06-02 19:42:25.099365 | orchestrator | changed: [testbed-node-4] 2025-06-02 19:42:25.099921 | orchestrator | changed: [testbed-node-5] 2025-06-02 19:42:25.100915 | orchestrator | changed: [testbed-node-0] 2025-06-02 19:42:25.102104 | orchestrator | changed: [testbed-node-2] 2025-06-02 19:42:25.102724 | orchestrator | changed: [testbed-node-1] 2025-06-02 19:42:25.103436 | orchestrator | 2025-06-02 19:42:25.103949 | orchestrator | TASK [osism.services.smartd : Manage smartd service] *************************** 2025-06-02 19:42:25.105261 | orchestrator | Monday 02 June 2025 19:42:25 +0000 (0:00:01.172) 0:03:56.700 *********** 2025-06-02 19:42:26.130585 | orchestrator | changed: [testbed-manager] 2025-06-02 19:42:26.132334 | orchestrator | changed: [testbed-node-3] 2025-06-02 19:42:26.132372 | orchestrator | changed: [testbed-node-4] 2025-06-02 19:42:26.132385 | orchestrator | changed: [testbed-node-0] 2025-06-02 19:42:26.133168 | orchestrator | changed: [testbed-node-5] 2025-06-02 19:42:26.133191 | orchestrator | changed: [testbed-node-1] 2025-06-02 19:42:26.133341 | orchestrator | changed: [testbed-node-2] 2025-06-02 19:42:26.133658 | orchestrator | 2025-06-02 19:42:26.134073 | orchestrator | TASK [osism.commons.cleanup : Gather variables for 
each operating system] ****** 2025-06-02 19:42:26.134332 | orchestrator | Monday 02 June 2025 19:42:26 +0000 (0:00:01.036) 0:03:57.737 *********** 2025-06-02 19:42:26.210913 | orchestrator | ok: [testbed-manager] 2025-06-02 19:42:26.249344 | orchestrator | ok: [testbed-node-3] 2025-06-02 19:42:26.319161 | orchestrator | ok: [testbed-node-4] 2025-06-02 19:42:26.357496 | orchestrator | ok: [testbed-node-5] 2025-06-02 19:42:26.413917 | orchestrator | ok: [testbed-node-0] 2025-06-02 19:42:26.414246 | orchestrator | ok: [testbed-node-1] 2025-06-02 19:42:26.415571 | orchestrator | ok: [testbed-node-2] 2025-06-02 19:42:26.416338 | orchestrator | 2025-06-02 19:42:26.417798 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_packages_distribution variable to default value] *** 2025-06-02 19:42:26.418224 | orchestrator | Monday 02 June 2025 19:42:26 +0000 (0:00:00.283) 0:03:58.020 *********** 2025-06-02 19:42:26.532892 | orchestrator | ok: [testbed-manager] 2025-06-02 19:42:26.566288 | orchestrator | ok: [testbed-node-3] 2025-06-02 19:42:26.603539 | orchestrator | ok: [testbed-node-4] 2025-06-02 19:42:26.645768 | orchestrator | ok: [testbed-node-5] 2025-06-02 19:42:26.735709 | orchestrator | ok: [testbed-node-0] 2025-06-02 19:42:26.736521 | orchestrator | ok: [testbed-node-1] 2025-06-02 19:42:26.737832 | orchestrator | ok: [testbed-node-2] 2025-06-02 19:42:26.739211 | orchestrator | 2025-06-02 19:42:26.739891 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_services_distribution variable to default value] *** 2025-06-02 19:42:26.740820 | orchestrator | Monday 02 June 2025 19:42:26 +0000 (0:00:00.320) 0:03:58.341 *********** 2025-06-02 19:42:26.843131 | orchestrator | ok: [testbed-manager] 2025-06-02 19:42:26.878289 | orchestrator | ok: [testbed-node-3] 2025-06-02 19:42:26.917458 | orchestrator | ok: [testbed-node-4] 2025-06-02 19:42:26.949130 | orchestrator | ok: [testbed-node-5] 2025-06-02 19:42:27.035491 | orchestrator | ok: [testbed-node-0] 2025-06-02 
19:42:27.036129 | orchestrator | ok: [testbed-node-1] 2025-06-02 19:42:27.037311 | orchestrator | ok: [testbed-node-2] 2025-06-02 19:42:27.039102 | orchestrator | 2025-06-02 19:42:27.039177 | orchestrator | TASK [osism.commons.cleanup : Populate service facts] ************************** 2025-06-02 19:42:27.039965 | orchestrator | Monday 02 June 2025 19:42:27 +0000 (0:00:00.300) 0:03:58.642 *********** 2025-06-02 19:42:32.722927 | orchestrator | ok: [testbed-manager] 2025-06-02 19:42:32.723040 | orchestrator | ok: [testbed-node-4] 2025-06-02 19:42:32.723057 | orchestrator | ok: [testbed-node-1] 2025-06-02 19:42:32.723203 | orchestrator | ok: [testbed-node-3] 2025-06-02 19:42:32.723944 | orchestrator | ok: [testbed-node-2] 2025-06-02 19:42:32.724513 | orchestrator | ok: [testbed-node-5] 2025-06-02 19:42:32.725246 | orchestrator | ok: [testbed-node-0] 2025-06-02 19:42:32.725878 | orchestrator | 2025-06-02 19:42:32.726821 | orchestrator | TASK [osism.commons.cleanup : Include distribution specific timer tasks] ******* 2025-06-02 19:42:32.727421 | orchestrator | Monday 02 June 2025 19:42:32 +0000 (0:00:05.685) 0:04:04.328 *********** 2025-06-02 19:42:33.110345 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/timers-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 19:42:33.111386 | orchestrator | 2025-06-02 19:42:33.115846 | orchestrator | TASK [osism.commons.cleanup : Disable apt-daily timers] ************************ 2025-06-02 19:42:33.120290 | orchestrator | Monday 02 June 2025 19:42:33 +0000 (0:00:00.388) 0:04:04.716 *********** 2025-06-02 19:42:33.193268 | orchestrator | skipping: [testbed-manager] => (item=apt-daily-upgrade)  2025-06-02 19:42:33.193357 | orchestrator | skipping: [testbed-manager] => (item=apt-daily)  2025-06-02 19:42:33.236500 | orchestrator | skipping: [testbed-node-3] => 
(item=apt-daily-upgrade)
2025-06-02 19:42:33.240208 | orchestrator | skipping: [testbed-manager]
2025-06-02 19:42:33.241306 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily)
2025-06-02 19:42:33.243842 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily-upgrade)
2025-06-02 19:42:33.312215 | orchestrator | skipping: [testbed-node-3]
2025-06-02 19:42:33.312295 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily)
2025-06-02 19:42:33.314280 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily-upgrade)
2025-06-02 19:42:33.373329 | orchestrator | skipping: [testbed-node-4]
2025-06-02 19:42:33.373424 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily)
2025-06-02 19:42:33.376015 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily-upgrade)
2025-06-02 19:42:33.376225 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily)
2025-06-02 19:42:33.433667 | orchestrator | skipping: [testbed-node-5]
2025-06-02 19:42:33.438954 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily-upgrade)
2025-06-02 19:42:33.520554 | orchestrator | skipping: [testbed-node-0]
2025-06-02 19:42:33.521199 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily)
2025-06-02 19:42:33.524574 | orchestrator | skipping: [testbed-node-1]
2025-06-02 19:42:33.525145 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily-upgrade)
2025-06-02 19:42:33.525789 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily)
2025-06-02 19:42:33.526410 | orchestrator | skipping: [testbed-node-2]
2025-06-02 19:42:33.527030 | orchestrator |
2025-06-02 19:42:33.527610 | orchestrator | TASK [osism.commons.cleanup : Include service tasks] ***************************
2025-06-02 19:42:33.528052 | orchestrator | Monday 02 June 2025 19:42:33 +0000 (0:00:00.410) 0:04:05.126 ***********
2025-06-02 19:42:33.912272 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/services-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-06-02 19:42:33.912375 | orchestrator |
2025-06-02 19:42:33.912751 | orchestrator | TASK [osism.commons.cleanup : Cleanup services] ********************************
2025-06-02 19:42:33.913495 | orchestrator | Monday 02 June 2025 19:42:33 +0000 (0:00:00.391) 0:04:05.518 ***********
2025-06-02 19:42:33.952893 | orchestrator | skipping: [testbed-manager] => (item=ModemManager.service)
2025-06-02 19:42:33.986489 | orchestrator | skipping: [testbed-manager]
2025-06-02 19:42:34.027414 | orchestrator | skipping: [testbed-node-3] => (item=ModemManager.service)
2025-06-02 19:42:34.071395 | orchestrator | skipping: [testbed-node-3]
2025-06-02 19:42:34.072039 | orchestrator | skipping: [testbed-node-4] => (item=ModemManager.service)
2025-06-02 19:42:34.115704 | orchestrator | skipping: [testbed-node-5] => (item=ModemManager.service)
2025-06-02 19:42:34.116009 | orchestrator | skipping: [testbed-node-4]
2025-06-02 19:42:34.117805 | orchestrator | skipping: [testbed-node-0] => (item=ModemManager.service)
2025-06-02 19:42:34.148103 | orchestrator | skipping: [testbed-node-5]
2025-06-02 19:42:34.235573 | orchestrator | skipping: [testbed-node-1] => (item=ModemManager.service)
2025-06-02 19:42:34.235726 | orchestrator | skipping: [testbed-node-0]
2025-06-02 19:42:34.236012 | orchestrator | skipping: [testbed-node-1]
2025-06-02 19:42:34.236510 | orchestrator | skipping: [testbed-node-2] => (item=ModemManager.service)
2025-06-02 19:42:34.237078 | orchestrator | skipping: [testbed-node-2]
2025-06-02 19:42:34.237879 | orchestrator |
2025-06-02 19:42:34.237901 | orchestrator | TASK [osism.commons.cleanup : Include packages tasks] **************************
2025-06-02 19:42:34.238122 | orchestrator | Monday 02 June 2025 19:42:34 +0000 (0:00:00.323) 0:04:05.842 ***********
2025-06-02 19:42:34.824568 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/packages-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-06-02 19:42:34.826118 | orchestrator |
2025-06-02 19:42:34.827341 | orchestrator | TASK [osism.commons.cleanup : Cleanup installed packages] **********************
2025-06-02 19:42:34.827982 | orchestrator | Monday 02 June 2025 19:42:34 +0000 (0:00:00.589) 0:04:06.431 ***********
2025-06-02 19:43:08.814127 | orchestrator | changed: [testbed-manager]
2025-06-02 19:43:08.814285 | orchestrator | changed: [testbed-node-2]
2025-06-02 19:43:08.814303 | orchestrator | changed: [testbed-node-3]
2025-06-02 19:43:08.814315 | orchestrator | changed: [testbed-node-1]
2025-06-02 19:43:08.814397 | orchestrator | changed: [testbed-node-4]
2025-06-02 19:43:08.816011 | orchestrator | changed: [testbed-node-0]
2025-06-02 19:43:08.817457 | orchestrator | changed: [testbed-node-5]
2025-06-02 19:43:08.823410 | orchestrator |
2025-06-02 19:43:08.823546 | orchestrator | TASK [osism.commons.cleanup : Remove cloudinit package] ************************
2025-06-02 19:43:08.824011 | orchestrator | Monday 02 June 2025 19:43:08 +0000 (0:00:33.986) 0:04:40.417 ***********
2025-06-02 19:43:16.749995 | orchestrator | changed: [testbed-manager]
2025-06-02 19:43:16.750176 | orchestrator | changed: [testbed-node-3]
2025-06-02 19:43:16.750313 | orchestrator | changed: [testbed-node-1]
2025-06-02 19:43:16.751335 | orchestrator | changed: [testbed-node-4]
2025-06-02 19:43:16.753551 | orchestrator | changed: [testbed-node-2]
2025-06-02 19:43:16.754139 | orchestrator | changed: [testbed-node-0]
2025-06-02 19:43:16.754520 | orchestrator | changed: [testbed-node-5]
2025-06-02 19:43:16.755031 | orchestrator |
2025-06-02 19:43:16.755760 | orchestrator | TASK [osism.commons.cleanup : Uninstall unattended-upgrades package] ***********
2025-06-02 19:43:16.755932 | orchestrator | Monday 02 June 2025 19:43:16 +0000 (0:00:07.934) 0:04:48.352 ***********
2025-06-02 19:43:23.818867 | orchestrator | changed: [testbed-manager]
2025-06-02 19:43:23.819828 | orchestrator | changed: [testbed-node-1]
2025-06-02 19:43:23.819936 | orchestrator | changed: [testbed-node-4]
2025-06-02 19:43:23.820302 | orchestrator | changed: [testbed-node-3]
2025-06-02 19:43:23.821787 | orchestrator | changed: [testbed-node-2]
2025-06-02 19:43:23.822776 | orchestrator | changed: [testbed-node-5]
2025-06-02 19:43:23.823479 | orchestrator | changed: [testbed-node-0]
2025-06-02 19:43:23.824628 | orchestrator |
2025-06-02 19:43:23.825288 | orchestrator | TASK [osism.commons.cleanup : Remove useless packages from the cache] **********
2025-06-02 19:43:23.826085 | orchestrator | Monday 02 June 2025 19:43:23 +0000 (0:00:07.072) 0:04:55.424 ***********
2025-06-02 19:43:25.375113 | orchestrator | ok: [testbed-manager]
2025-06-02 19:43:25.375942 | orchestrator | ok: [testbed-node-4]
2025-06-02 19:43:25.376779 | orchestrator | ok: [testbed-node-3]
2025-06-02 19:43:25.377804 | orchestrator | ok: [testbed-node-5]
2025-06-02 19:43:25.378745 | orchestrator | ok: [testbed-node-0]
2025-06-02 19:43:25.379200 | orchestrator | ok: [testbed-node-1]
2025-06-02 19:43:25.379683 | orchestrator | ok: [testbed-node-2]
2025-06-02 19:43:25.379898 | orchestrator |
2025-06-02 19:43:25.380665 | orchestrator | TASK [osism.commons.cleanup : Remove dependencies that are no longer required] ***
2025-06-02 19:43:25.381078 | orchestrator | Monday 02 June 2025 19:43:25 +0000 (0:00:01.555) 0:04:56.980 ***********
2025-06-02 19:43:30.507182 | orchestrator | changed: [testbed-node-4]
2025-06-02 19:43:30.507291 | orchestrator | changed: [testbed-node-1]
2025-06-02 19:43:30.508690 | orchestrator | changed: [testbed-manager]
2025-06-02 19:43:30.508718 | orchestrator | changed: [testbed-node-2]
2025-06-02 19:43:30.508729 | orchestrator | changed: [testbed-node-5]
2025-06-02 19:43:30.509156 | orchestrator | changed: [testbed-node-3]
2025-06-02 19:43:30.509547 | orchestrator | changed: [testbed-node-0]
2025-06-02 19:43:30.510271 | orchestrator |
2025-06-02 19:43:30.510956 | orchestrator | TASK [osism.commons.cleanup : Include cloudinit tasks] *************************
2025-06-02 19:43:30.511625 | orchestrator | Monday 02 June 2025 19:43:30 +0000 (0:00:05.131) 0:05:02.112 ***********
2025-06-02 19:43:30.928931 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/cloudinit.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-06-02 19:43:30.929963 | orchestrator |
2025-06-02 19:43:30.934113 | orchestrator | TASK [osism.commons.cleanup : Remove cloud-init configuration directory] *******
2025-06-02 19:43:30.934160 | orchestrator | Monday 02 June 2025 19:43:30 +0000 (0:00:00.423) 0:05:02.535 ***********
2025-06-02 19:43:31.628284 | orchestrator | changed: [testbed-manager]
2025-06-02 19:43:31.631556 | orchestrator | changed: [testbed-node-3]
2025-06-02 19:43:31.631665 | orchestrator | changed: [testbed-node-4]
2025-06-02 19:43:31.631682 | orchestrator | changed: [testbed-node-5]
2025-06-02 19:43:31.632457 | orchestrator | changed: [testbed-node-0]
2025-06-02 19:43:31.633163 | orchestrator | changed: [testbed-node-1]
2025-06-02 19:43:31.634110 | orchestrator | changed: [testbed-node-2]
2025-06-02 19:43:31.635088 | orchestrator |
2025-06-02 19:43:31.636065 | orchestrator | TASK [osism.commons.timezone : Install tzdata package] *************************
2025-06-02 19:43:31.636349 | orchestrator | Monday 02 June 2025 19:43:31 +0000 (0:00:00.697) 0:05:03.233 ***********
2025-06-02 19:43:33.132117 | orchestrator | ok: [testbed-manager]
2025-06-02 19:43:33.132825 | orchestrator | ok: [testbed-node-3]
2025-06-02 19:43:33.134146 | orchestrator | ok: [testbed-node-4]
2025-06-02 19:43:33.135166 | orchestrator | ok: [testbed-node-5]
2025-06-02 19:43:33.138208 | orchestrator | ok: [testbed-node-0]
2025-06-02 19:43:33.138992 | orchestrator | ok: [testbed-node-1]
2025-06-02 19:43:33.139673 | orchestrator | ok: [testbed-node-2]
2025-06-02 19:43:33.140358 | orchestrator |
2025-06-02 19:43:33.144453 | orchestrator | TASK [osism.commons.timezone : Set timezone to UTC] ****************************
2025-06-02 19:43:33.144940 | orchestrator | Monday 02 June 2025 19:43:33 +0000 (0:00:01.504) 0:05:04.738 ***********
2025-06-02 19:43:33.871971 | orchestrator | changed: [testbed-node-4]
2025-06-02 19:43:33.873368 | orchestrator | changed: [testbed-node-3]
2025-06-02 19:43:33.874106 | orchestrator | changed: [testbed-node-0]
2025-06-02 19:43:33.875990 | orchestrator | changed: [testbed-node-1]
2025-06-02 19:43:33.876773 | orchestrator | changed: [testbed-node-5]
2025-06-02 19:43:33.877445 | orchestrator | changed: [testbed-manager]
2025-06-02 19:43:33.878067 | orchestrator | changed: [testbed-node-2]
2025-06-02 19:43:33.879371 | orchestrator |
2025-06-02 19:43:33.880727 | orchestrator | TASK [osism.commons.timezone : Create /etc/adjtime file] ***********************
2025-06-02 19:43:33.881670 | orchestrator | Monday 02 June 2025 19:43:33 +0000 (0:00:00.739) 0:05:05.477 ***********
2025-06-02 19:43:33.937238 | orchestrator | skipping: [testbed-manager]
2025-06-02 19:43:34.030699 | orchestrator | skipping: [testbed-node-3]
2025-06-02 19:43:34.066752 | orchestrator | skipping: [testbed-node-4]
2025-06-02 19:43:34.102389 | orchestrator | skipping: [testbed-node-5]
2025-06-02 19:43:34.179759 | orchestrator | skipping: [testbed-node-0]
2025-06-02 19:43:34.180015 | orchestrator | skipping: [testbed-node-1]
2025-06-02 19:43:34.182317 | orchestrator | skipping: [testbed-node-2]
2025-06-02 19:43:34.183525 | orchestrator |
2025-06-02 19:43:34.185156 | orchestrator | TASK [osism.commons.timezone : Ensure UTC in /etc/adjtime] *********************
2025-06-02 19:43:34.186842 | orchestrator | Monday 02 June 2025 19:43:34 +0000 (0:00:00.308) 0:05:05.786 ***********
2025-06-02 19:43:34.248550 | orchestrator | skipping: [testbed-manager]
2025-06-02 19:43:34.280906 | orchestrator | skipping: [testbed-node-3]
2025-06-02 19:43:34.313759 | orchestrator | skipping: [testbed-node-4]
2025-06-02 19:43:34.344801 | orchestrator | skipping: [testbed-node-5]
2025-06-02 19:43:34.373877 | orchestrator | skipping: [testbed-node-0]
2025-06-02 19:43:34.560138 | orchestrator | skipping: [testbed-node-1]
2025-06-02 19:43:34.561004 | orchestrator | skipping: [testbed-node-2]
2025-06-02 19:43:34.571355 | orchestrator |
2025-06-02 19:43:34.571402 | orchestrator | TASK [osism.services.docker : Gather variables for each operating system] ******
2025-06-02 19:43:34.571416 | orchestrator | Monday 02 June 2025 19:43:34 +0000 (0:00:00.378) 0:05:06.165 ***********
2025-06-02 19:43:34.646687 | orchestrator | ok: [testbed-manager]
2025-06-02 19:43:34.716135 | orchestrator | ok: [testbed-node-3]
2025-06-02 19:43:34.749753 | orchestrator | ok: [testbed-node-4]
2025-06-02 19:43:34.783190 | orchestrator | ok: [testbed-node-5]
2025-06-02 19:43:34.858832 | orchestrator | ok: [testbed-node-0]
2025-06-02 19:43:34.860254 | orchestrator | ok: [testbed-node-1]
2025-06-02 19:43:34.864118 | orchestrator | ok: [testbed-node-2]
2025-06-02 19:43:34.864159 | orchestrator |
2025-06-02 19:43:34.864172 | orchestrator | TASK [osism.services.docker : Set docker_version variable to default value] ****
2025-06-02 19:43:34.864186 | orchestrator | Monday 02 June 2025 19:43:34 +0000 (0:00:00.299) 0:05:06.464 ***********
2025-06-02 19:43:34.932798 | orchestrator | skipping: [testbed-manager]
2025-06-02 19:43:34.967373 | orchestrator | skipping: [testbed-node-3]
2025-06-02 19:43:35.034916 | orchestrator | skipping: [testbed-node-4]
2025-06-02 19:43:35.071956 | orchestrator | skipping: [testbed-node-5]
2025-06-02 19:43:35.136221 | orchestrator | skipping: [testbed-node-0]
2025-06-02 19:43:35.136544 | orchestrator | skipping: [testbed-node-1]
2025-06-02 19:43:35.137730 | orchestrator | skipping: [testbed-node-2]
2025-06-02 19:43:35.138419 | orchestrator |
2025-06-02 19:43:35.140947 | orchestrator | TASK [osism.services.docker : Set docker_cli_version variable to default value] ***
2025-06-02 19:43:35.140972 | orchestrator | Monday 02 June 2025 19:43:35 +0000 (0:00:00.279) 0:05:06.744 ***********
2025-06-02 19:43:35.237285 | orchestrator | ok: [testbed-manager]
2025-06-02 19:43:35.267824 | orchestrator | ok: [testbed-node-3]
2025-06-02 19:43:35.344898 | orchestrator | ok: [testbed-node-4]
2025-06-02 19:43:35.380340 | orchestrator | ok: [testbed-node-5]
2025-06-02 19:43:35.454932 | orchestrator | ok: [testbed-node-0]
2025-06-02 19:43:35.455496 | orchestrator | ok: [testbed-node-1]
2025-06-02 19:43:35.456136 | orchestrator | ok: [testbed-node-2]
2025-06-02 19:43:35.457014 | orchestrator |
2025-06-02 19:43:35.457985 | orchestrator | TASK [osism.services.docker : Print used docker version] ***********************
2025-06-02 19:43:35.458753 | orchestrator | Monday 02 June 2025 19:43:35 +0000 (0:00:00.291) 0:05:07.061 ***********
2025-06-02 19:43:35.539393 | orchestrator | ok: [testbed-manager] =>
2025-06-02 19:43:35.541933 | orchestrator |   docker_version: 5:27.5.1
2025-06-02 19:43:35.612897 | orchestrator | ok: [testbed-node-3] =>
2025-06-02 19:43:35.614320 | orchestrator |   docker_version: 5:27.5.1
2025-06-02 19:43:35.649379 | orchestrator | ok: [testbed-node-4] =>
2025-06-02 19:43:35.650200 | orchestrator |   docker_version: 5:27.5.1
2025-06-02 19:43:35.683663 | orchestrator | ok: [testbed-node-5] =>
2025-06-02 19:43:35.685904 | orchestrator |   docker_version: 5:27.5.1
2025-06-02 19:43:35.744563 | orchestrator | ok: [testbed-node-0] =>
2025-06-02 19:43:35.745989 | orchestrator |   docker_version: 5:27.5.1
2025-06-02 19:43:35.746928 | orchestrator | ok: [testbed-node-1] =>
2025-06-02 19:43:35.747918 | orchestrator |   docker_version: 5:27.5.1
2025-06-02 19:43:35.748349 | orchestrator | ok: [testbed-node-2] =>
2025-06-02 19:43:35.748792 | orchestrator |   docker_version: 5:27.5.1
2025-06-02 19:43:35.749343 | orchestrator |
2025-06-02 19:43:35.749868 | orchestrator | TASK [osism.services.docker : Print used docker cli version] *******************
2025-06-02 19:43:35.750562 | orchestrator | Monday 02 June 2025 19:43:35 +0000 (0:00:00.291) 0:05:07.352 ***********
2025-06-02 19:43:35.862191 | orchestrator | ok: [testbed-manager] =>
2025-06-02 19:43:35.862728 | orchestrator |   docker_cli_version: 5:27.5.1
2025-06-02 19:43:36.000484 | orchestrator | ok: [testbed-node-3] =>
2025-06-02 19:43:36.001093 | orchestrator |   docker_cli_version: 5:27.5.1
2025-06-02 19:43:36.036299 | orchestrator | ok: [testbed-node-4] =>
2025-06-02 19:43:36.036715 | orchestrator |   docker_cli_version: 5:27.5.1
2025-06-02 19:43:36.080068 | orchestrator | ok: [testbed-node-5] =>
2025-06-02 19:43:36.080145 | orchestrator |   docker_cli_version: 5:27.5.1
2025-06-02 19:43:36.146258 | orchestrator | ok: [testbed-node-0] =>
2025-06-02 19:43:36.147485 | orchestrator |   docker_cli_version: 5:27.5.1
2025-06-02 19:43:36.148956 | orchestrator | ok: [testbed-node-1] =>
2025-06-02 19:43:36.149946 | orchestrator |   docker_cli_version: 5:27.5.1
2025-06-02 19:43:36.151310 | orchestrator | ok: [testbed-node-2] =>
2025-06-02 19:43:36.152202 | orchestrator |   docker_cli_version: 5:27.5.1
2025-06-02 19:43:36.153817 | orchestrator |
2025-06-02 19:43:36.154728 | orchestrator | TASK [osism.services.docker : Include block storage tasks] *********************
2025-06-02 19:43:36.155712 | orchestrator | Monday 02 June 2025 19:43:36 +0000 (0:00:00.399) 0:05:07.752 ***********
2025-06-02 19:43:36.243184 | orchestrator | skipping: [testbed-manager]
2025-06-02 19:43:36.272234 | orchestrator | skipping: [testbed-node-3]
2025-06-02 19:43:36.301767 | orchestrator | skipping: [testbed-node-4]
2025-06-02 19:43:36.332408 | orchestrator | skipping: [testbed-node-5]
2025-06-02 19:43:36.381640 | orchestrator | skipping: [testbed-node-0]
2025-06-02 19:43:36.382722 | orchestrator | skipping: [testbed-node-1]
2025-06-02 19:43:36.383325 | orchestrator | skipping: [testbed-node-2]
2025-06-02 19:43:36.384169 | orchestrator |
2025-06-02 19:43:36.384983 | orchestrator | TASK [osism.services.docker : Include zram storage tasks] **********************
2025-06-02 19:43:36.385909 | orchestrator | Monday 02 June 2025 19:43:36 +0000 (0:00:00.236) 0:05:07.989 ***********
2025-06-02 19:43:36.460654 | orchestrator | skipping: [testbed-manager]
2025-06-02 19:43:36.493924 | orchestrator | skipping: [testbed-node-3]
2025-06-02 19:43:36.533566 | orchestrator | skipping: [testbed-node-4]
2025-06-02 19:43:36.568443 | orchestrator | skipping: [testbed-node-5]
2025-06-02 19:43:36.602898 | orchestrator | skipping: [testbed-node-0]
2025-06-02 19:43:36.668440 | orchestrator | skipping: [testbed-node-1]
2025-06-02 19:43:36.668650 | orchestrator | skipping: [testbed-node-2]
2025-06-02 19:43:36.670087 | orchestrator |
2025-06-02 19:43:36.670687 | orchestrator | TASK [osism.services.docker : Include docker install tasks] ********************
2025-06-02 19:43:36.671382 | orchestrator | Monday 02 June 2025 19:43:36 +0000 (0:00:00.285) 0:05:08.274 ***********
2025-06-02 19:43:37.118866 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/install-docker-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-06-02 19:43:37.119787 | orchestrator |
2025-06-02 19:43:37.120747 | orchestrator | TASK [osism.services.docker : Remove old architecture-dependent repository] ****
2025-06-02 19:43:37.122165 | orchestrator | Monday 02 June 2025 19:43:37 +0000 (0:00:00.444) 0:05:08.719 ***********
2025-06-02 19:43:37.938635 | orchestrator | ok: [testbed-manager]
2025-06-02 19:43:37.939906 | orchestrator | ok: [testbed-node-1]
2025-06-02 19:43:37.941449 | orchestrator | ok: [testbed-node-0]
2025-06-02 19:43:37.942848 | orchestrator | ok: [testbed-node-3]
2025-06-02 19:43:37.943995 | orchestrator | ok: [testbed-node-2]
2025-06-02 19:43:37.945160 | orchestrator | ok: [testbed-node-5]
2025-06-02 19:43:37.945692 | orchestrator | ok: [testbed-node-4]
2025-06-02 19:43:37.946202 | orchestrator |
2025-06-02 19:43:37.947053 | orchestrator | TASK [osism.services.docker : Gather package facts] ****************************
2025-06-02 19:43:37.947882 | orchestrator | Monday 02 June 2025 19:43:37 +0000 (0:00:00.824) 0:05:09.543 ***********
2025-06-02 19:43:40.704422 | orchestrator | ok: [testbed-node-1]
2025-06-02 19:43:40.705756 | orchestrator | ok: [testbed-node-0]
2025-06-02 19:43:40.707174 | orchestrator | ok: [testbed-node-2]
2025-06-02 19:43:40.708534 | orchestrator | ok: [testbed-node-3]
2025-06-02 19:43:40.709782 | orchestrator | ok: [testbed-node-5]
2025-06-02 19:43:40.710882 | orchestrator | ok: [testbed-manager]
2025-06-02 19:43:40.711914 | orchestrator | ok: [testbed-node-4]
2025-06-02 19:43:40.712905 | orchestrator |
2025-06-02 19:43:40.713422 | orchestrator | TASK [osism.services.docker : Check whether packages are installed that should not be installed] ***
2025-06-02 19:43:40.714447 | orchestrator | Monday 02 June 2025 19:43:40 +0000 (0:00:02.767) 0:05:12.310 ***********
2025-06-02 19:43:40.791505 | orchestrator | skipping: [testbed-manager] => (item=containerd)
2025-06-02 19:43:40.793244 | orchestrator | skipping: [testbed-manager] => (item=docker.io)
2025-06-02 19:43:40.796506 | orchestrator | skipping: [testbed-manager] => (item=docker-engine)
2025-06-02 19:43:40.880075 | orchestrator | skipping: [testbed-node-3] => (item=containerd)
2025-06-02 19:43:40.880226 | orchestrator | skipping: [testbed-node-3] => (item=docker.io)
2025-06-02 19:43:40.881947 | orchestrator | skipping: [testbed-node-3] => (item=docker-engine)
2025-06-02 19:43:40.946169 | orchestrator | skipping: [testbed-manager]
2025-06-02 19:43:40.947009 | orchestrator | skipping: [testbed-node-4] => (item=containerd)
2025-06-02 19:43:40.948623 | orchestrator | skipping: [testbed-node-4] => (item=docker.io)
2025-06-02 19:43:41.162730 | orchestrator | skipping: [testbed-node-3]
2025-06-02 19:43:41.163543 | orchestrator | skipping: [testbed-node-4] => (item=docker-engine)
2025-06-02 19:43:41.164558 | orchestrator | skipping: [testbed-node-5] => (item=containerd)
2025-06-02 19:43:41.165642 | orchestrator | skipping: [testbed-node-5] => (item=docker.io)
2025-06-02 19:43:41.166659 | orchestrator | skipping: [testbed-node-5] => (item=docker-engine)
2025-06-02 19:43:41.234105 | orchestrator | skipping: [testbed-node-4]
2025-06-02 19:43:41.235282 | orchestrator | skipping: [testbed-node-0] => (item=containerd)
2025-06-02 19:43:41.236098 | orchestrator | skipping: [testbed-node-0] => (item=docker.io)
2025-06-02 19:43:41.304504 | orchestrator | skipping: [testbed-node-0] => (item=docker-engine)
2025-06-02 19:43:41.305559 | orchestrator | skipping: [testbed-node-5]
2025-06-02 19:43:41.306652 | orchestrator | skipping: [testbed-node-1] => (item=containerd)
2025-06-02 19:43:41.307492 | orchestrator | skipping: [testbed-node-1] => (item=docker.io)
2025-06-02 19:43:41.308455 | orchestrator | skipping: [testbed-node-1] => (item=docker-engine)
2025-06-02 19:43:41.448228 | orchestrator | skipping: [testbed-node-0]
2025-06-02 19:43:41.448724 | orchestrator | skipping: [testbed-node-1]
2025-06-02 19:43:41.450316 | orchestrator | skipping: [testbed-node-2] => (item=containerd)
2025-06-02 19:43:41.451184 | orchestrator | skipping: [testbed-node-2] => (item=docker.io)
2025-06-02 19:43:41.452235 | orchestrator | skipping: [testbed-node-2] => (item=docker-engine)
2025-06-02 19:43:41.453533 | orchestrator | skipping: [testbed-node-2]
2025-06-02 19:43:41.454692 | orchestrator |
2025-06-02 19:43:41.455643 | orchestrator | TASK [osism.services.docker : Install apt-transport-https package] *************
2025-06-02 19:43:41.456622 | orchestrator | Monday 02 June 2025 19:43:41 +0000 (0:00:00.741) 0:05:13.052 ***********
2025-06-02 19:43:47.177716 | orchestrator | ok: [testbed-manager]
2025-06-02 19:43:47.178538 | orchestrator | changed: [testbed-node-4]
2025-06-02 19:43:47.180292 | orchestrator | changed: [testbed-node-3]
2025-06-02 19:43:47.182081 | orchestrator | changed: [testbed-node-5]
2025-06-02 19:43:47.182882 | orchestrator | changed: [testbed-node-1]
2025-06-02 19:43:47.183214 | orchestrator | changed: [testbed-node-0]
2025-06-02 19:43:47.183910 | orchestrator | changed: [testbed-node-2]
2025-06-02 19:43:47.184724 | orchestrator |
2025-06-02 19:43:47.185106 | orchestrator | TASK [osism.services.docker : Add repository gpg key] **************************
2025-06-02 19:43:47.185842 | orchestrator | Monday 02 June 2025 19:43:47 +0000 (0:00:05.729) 0:05:18.781 ***********
2025-06-02 19:43:48.213278 | orchestrator | changed: [testbed-node-4]
2025-06-02 19:43:48.214455 | orchestrator | changed: [testbed-node-3]
2025-06-02 19:43:48.215403 | orchestrator | ok: [testbed-manager]
2025-06-02 19:43:48.217570 | orchestrator | changed: [testbed-node-5]
2025-06-02 19:43:48.218527 | orchestrator | changed: [testbed-node-0]
2025-06-02 19:43:48.219669 | orchestrator | changed: [testbed-node-1]
2025-06-02 19:43:48.220325 | orchestrator | changed: [testbed-node-2]
2025-06-02 19:43:48.221246 | orchestrator |
2025-06-02 19:43:48.221903 | orchestrator | TASK [osism.services.docker : Add repository] **********************************
2025-06-02 19:43:48.223145 | orchestrator | Monday 02 June 2025 19:43:48 +0000 (0:00:01.036) 0:05:19.818 ***********
2025-06-02 19:43:55.376816 | orchestrator | ok: [testbed-manager]
2025-06-02 19:43:55.376999 | orchestrator | changed: [testbed-node-4]
2025-06-02 19:43:55.380405 | orchestrator | changed: [testbed-node-3]
2025-06-02 19:43:55.381911 | orchestrator | changed: [testbed-node-5]
2025-06-02 19:43:55.382898 | orchestrator | changed: [testbed-node-1]
2025-06-02 19:43:55.384177 | orchestrator | changed: [testbed-node-2]
2025-06-02 19:43:55.385604 | orchestrator | changed: [testbed-node-0]
2025-06-02 19:43:55.386389 | orchestrator |
2025-06-02 19:43:55.387360 | orchestrator | TASK [osism.services.docker : Update package cache] ****************************
2025-06-02 19:43:55.387908 | orchestrator | Monday 02 June 2025 19:43:55 +0000 (0:00:07.164) 0:05:26.982 ***********
2025-06-02 19:43:58.480610 | orchestrator | changed: [testbed-manager]
2025-06-02 19:43:58.481506 | orchestrator | changed: [testbed-node-3]
2025-06-02 19:43:58.482842 | orchestrator | changed: [testbed-node-4]
2025-06-02 19:43:58.484230 | orchestrator | changed: [testbed-node-5]
2025-06-02 19:43:58.485165 | orchestrator | changed: [testbed-node-1]
2025-06-02 19:43:58.485879 | orchestrator | changed: [testbed-node-0]
2025-06-02 19:43:58.486797 | orchestrator | changed: [testbed-node-2]
2025-06-02 19:43:58.487734 | orchestrator |
2025-06-02 19:43:58.488104 | orchestrator | TASK [osism.services.docker : Pin docker package version] **********************
2025-06-02 19:43:58.489157 | orchestrator | Monday 02 June 2025 19:43:58 +0000 (0:00:03.100) 0:05:30.083 ***********
2025-06-02 19:44:00.010815 | orchestrator | ok: [testbed-manager]
2025-06-02 19:44:00.011542 | orchestrator | changed: [testbed-node-3]
2025-06-02 19:44:00.014171 | orchestrator | changed: [testbed-node-4]
2025-06-02 19:44:00.015042 | orchestrator | changed: [testbed-node-5]
2025-06-02 19:44:00.016331 | orchestrator | changed: [testbed-node-0]
2025-06-02 19:44:00.017696 | orchestrator | changed: [testbed-node-1]
2025-06-02 19:44:00.018540 | orchestrator | changed: [testbed-node-2]
2025-06-02 19:44:00.019519 | orchestrator |
2025-06-02 19:44:00.020436 | orchestrator | TASK [osism.services.docker : Pin docker-cli package version] ******************
2025-06-02 19:44:00.021452 | orchestrator | Monday 02 June 2025 19:44:00 +0000 (0:00:01.532) 0:05:31.615 ***********
2025-06-02 19:44:01.312304 | orchestrator | ok: [testbed-manager]
2025-06-02 19:44:01.312862 | orchestrator | changed: [testbed-node-3]
2025-06-02 19:44:01.313150 | orchestrator | changed: [testbed-node-4]
2025-06-02 19:44:01.314871 | orchestrator | changed: [testbed-node-5]
2025-06-02 19:44:01.315394 | orchestrator | changed: [testbed-node-0]
2025-06-02 19:44:01.316252 | orchestrator | changed: [testbed-node-1]
2025-06-02 19:44:01.317513 | orchestrator | changed: [testbed-node-2]
2025-06-02 19:44:01.317775 | orchestrator |
2025-06-02 19:44:01.318531 | orchestrator | TASK [osism.services.docker : Unlock containerd package] ***********************
2025-06-02 19:44:01.319132 | orchestrator | Monday 02 June 2025 19:44:01 +0000 (0:00:01.301) 0:05:32.916 ***********
2025-06-02 19:44:01.535216 | orchestrator | skipping: [testbed-node-3]
2025-06-02 19:44:01.610337 | orchestrator | skipping: [testbed-node-4]
2025-06-02 19:44:01.672660 | orchestrator | skipping: [testbed-node-5]
2025-06-02 19:44:01.737468 | orchestrator | skipping: [testbed-node-0]
2025-06-02 19:44:01.915652 | orchestrator | skipping: [testbed-node-1]
2025-06-02 19:44:01.916634 | orchestrator | skipping: [testbed-node-2]
2025-06-02 19:44:01.917713 | orchestrator | changed: [testbed-manager]
2025-06-02 19:44:01.918674 | orchestrator |
2025-06-02 19:44:01.919372 | orchestrator | TASK [osism.services.docker : Install containerd package] **********************
2025-06-02 19:44:01.920329 | orchestrator | Monday 02 June 2025 19:44:01 +0000 (0:00:00.606) 0:05:33.522 ***********
2025-06-02 19:44:11.269234 | orchestrator | ok: [testbed-manager]
2025-06-02 19:44:11.269454 | orchestrator | changed: [testbed-node-4]
2025-06-02 19:44:11.270555 | orchestrator | changed: [testbed-node-3]
2025-06-02 19:44:11.270947 | orchestrator | changed: [testbed-node-5]
2025-06-02 19:44:11.273134 | orchestrator | changed: [testbed-node-1]
2025-06-02 19:44:11.274235 | orchestrator | changed: [testbed-node-0]
2025-06-02 19:44:11.275120 | orchestrator | changed: [testbed-node-2]
2025-06-02 19:44:11.275789 | orchestrator |
2025-06-02 19:44:11.276544 | orchestrator | TASK [osism.services.docker : Lock containerd package] *************************
2025-06-02 19:44:11.277252 | orchestrator | Monday 02 June 2025 19:44:11 +0000 (0:00:09.350) 0:05:42.873 ***********
2025-06-02 19:44:12.139379 | orchestrator | changed: [testbed-manager]
2025-06-02 19:44:12.140133 | orchestrator | changed: [testbed-node-3]
2025-06-02 19:44:12.141066 | orchestrator | changed: [testbed-node-4]
2025-06-02 19:44:12.141845 | orchestrator | changed: [testbed-node-5]
2025-06-02 19:44:12.142893 | orchestrator | changed: [testbed-node-0]
2025-06-02 19:44:12.143408 | orchestrator | changed: [testbed-node-1]
2025-06-02 19:44:12.144990 | orchestrator | changed: [testbed-node-2]
2025-06-02 19:44:12.145012 | orchestrator |
2025-06-02 19:44:12.145502 | orchestrator | TASK [osism.services.docker : Install docker-cli package] **********************
2025-06-02 19:44:12.146455 | orchestrator | Monday 02 June 2025 19:44:12 +0000 (0:00:00.871) 0:05:43.745 ***********
2025-06-02 19:44:20.786747 | orchestrator | ok: [testbed-manager]
2025-06-02 19:44:20.787471 | orchestrator | changed: [testbed-node-2]
2025-06-02 19:44:20.789493 | orchestrator | changed: [testbed-node-3]
2025-06-02 19:44:20.790803 | orchestrator | changed: [testbed-node-1]
2025-06-02 19:44:20.791450 | orchestrator | changed: [testbed-node-5]
2025-06-02 19:44:20.792283 | orchestrator | changed: [testbed-node-0]
2025-06-02 19:44:20.792968 | orchestrator | changed: [testbed-node-4]
2025-06-02 19:44:20.793758 | orchestrator |
2025-06-02 19:44:20.794314 | orchestrator | TASK [osism.services.docker : Install docker package] **************************
2025-06-02 19:44:20.795156 | orchestrator | Monday 02 June 2025 19:44:20 +0000 (0:00:08.647) 0:05:52.393 ***********
2025-06-02 19:44:31.082191 | orchestrator | ok: [testbed-manager]
2025-06-02 19:44:31.082309 | orchestrator | changed: [testbed-node-4]
2025-06-02 19:44:31.082392 | orchestrator | changed: [testbed-node-1]
2025-06-02 19:44:31.084035 | orchestrator | changed: [testbed-node-2]
2025-06-02 19:44:31.084058 | orchestrator | changed: [testbed-node-5]
2025-06-02 19:44:31.085618 | orchestrator | changed: [testbed-node-3]
2025-06-02 19:44:31.087179 | orchestrator | changed: [testbed-node-0]
2025-06-02 19:44:31.090819 | orchestrator |
2025-06-02 19:44:31.092357 | orchestrator | TASK [osism.services.docker : Unblock installation of python docker packages] ***
2025-06-02 19:44:31.093752 | orchestrator | Monday 02 June 2025 19:44:31 +0000 (0:00:10.292) 0:06:02.685 ***********
2025-06-02 19:44:31.515023 | orchestrator | ok: [testbed-manager] => (item=python3-docker)
2025-06-02 19:44:31.515483 | orchestrator | ok: [testbed-node-3] => (item=python3-docker)
2025-06-02 19:44:32.267514 | orchestrator | ok: [testbed-node-4] => (item=python3-docker)
2025-06-02 19:44:32.267747 | orchestrator | ok: [testbed-node-5] => (item=python3-docker)
2025-06-02 19:44:32.269719 | orchestrator | ok: [testbed-node-0] => (item=python3-docker)
2025-06-02 19:44:32.270450 | orchestrator | ok: [testbed-manager] => (item=python-docker)
2025-06-02 19:44:32.271685 | orchestrator | ok: [testbed-node-1] => (item=python3-docker)
2025-06-02 19:44:32.272424 | orchestrator | ok: [testbed-node-3] => (item=python-docker)
2025-06-02 19:44:32.273130 | orchestrator | ok: [testbed-node-2] => (item=python3-docker)
2025-06-02 19:44:32.273864 | orchestrator | ok: [testbed-node-4] => (item=python-docker)
2025-06-02 19:44:32.274440 | orchestrator | ok: [testbed-node-5] => (item=python-docker)
2025-06-02 19:44:32.274998 | orchestrator | ok: [testbed-node-0] => (item=python-docker)
2025-06-02 19:44:32.275666 | orchestrator | ok: [testbed-node-1] => (item=python-docker)
2025-06-02 19:44:32.276703 | orchestrator | ok: [testbed-node-2] => (item=python-docker)
2025-06-02 19:44:32.278547 | orchestrator |
2025-06-02 19:44:32.278873 | orchestrator | TASK [osism.services.docker : Install python3 docker package] ******************
2025-06-02 19:44:32.279599 | orchestrator | Monday 02 June 2025 19:44:32 +0000 (0:00:01.186) 0:06:03.872 ***********
2025-06-02 19:44:32.480232 | orchestrator | skipping: [testbed-manager]
2025-06-02 19:44:32.547889 | orchestrator | skipping: [testbed-node-3]
2025-06-02 19:44:32.611458 | orchestrator | skipping: [testbed-node-4]
2025-06-02 19:44:32.672446 | orchestrator | skipping: [testbed-node-5]
2025-06-02 19:44:32.784266 | orchestrator | skipping: [testbed-node-0]
2025-06-02 19:44:32.784937 | orchestrator | skipping: [testbed-node-1]
2025-06-02 19:44:32.786708 | orchestrator | skipping: [testbed-node-2]
2025-06-02 19:44:32.787599 | orchestrator |
2025-06-02 19:44:32.788496 | orchestrator | TASK [osism.services.docker : Install python3 docker package from Debian Sid] ***
2025-06-02 19:44:32.789453 | orchestrator | Monday 02 June 2025 19:44:32 +0000 (0:00:00.519) 0:06:04.391 ***********
2025-06-02 19:44:36.668813 | orchestrator | ok: [testbed-manager]
2025-06-02 19:44:36.669031 | orchestrator | changed: [testbed-node-4]
2025-06-02 19:44:36.670293 | orchestrator | changed: [testbed-node-1]
2025-06-02 19:44:36.671618 | orchestrator | changed: [testbed-node-3]
2025-06-02 19:44:36.672186 | orchestrator | changed: [testbed-node-0]
2025-06-02 19:44:36.672949 | orchestrator | changed: [testbed-node-2]
2025-06-02 19:44:36.674508 | orchestrator | changed: [testbed-node-5]
2025-06-02 19:44:36.675277 | orchestrator |
2025-06-02 19:44:36.675902 | orchestrator | TASK [osism.services.docker : Remove python docker packages (install python bindings from pip)] ***
2025-06-02 19:44:36.676780 | orchestrator | Monday 02 June 2025 19:44:36 +0000 (0:00:03.881) 0:06:08.273 ***********
2025-06-02 19:44:36.796711 | orchestrator | skipping: [testbed-manager]
2025-06-02 19:44:36.859630 | orchestrator | skipping: [testbed-node-3]
2025-06-02 19:44:36.936745 | orchestrator | skipping: [testbed-node-4]
2025-06-02 19:44:37.009150 | orchestrator | skipping: [testbed-node-5]
2025-06-02 19:44:37.074278 | orchestrator | skipping: [testbed-node-0]
2025-06-02 19:44:37.166254 | orchestrator | skipping: [testbed-node-1]
2025-06-02 19:44:37.168187 | orchestrator | skipping: [testbed-node-2]
2025-06-02 19:44:37.171058 | orchestrator |
2025-06-02 19:44:37.171089 | orchestrator | TASK [osism.services.docker : Block installation of python docker packages (install python bindings from pip)] ***
2025-06-02 19:44:37.171102 | orchestrator | Monday 02 June 2025 19:44:37 +0000 (0:00:00.498) 0:06:08.772 ***********
2025-06-02 19:44:37.241384 | orchestrator | skipping: [testbed-manager] => (item=python3-docker)
2025-06-02 19:44:37.241729 | orchestrator | skipping: [testbed-manager] => (item=python-docker)
2025-06-02 19:44:37.306494 | orchestrator | skipping: [testbed-manager]
2025-06-02 19:44:37.307477 | orchestrator | skipping: [testbed-node-3] => (item=python3-docker)
2025-06-02 19:44:37.376029 | orchestrator | skipping: [testbed-node-3] => (item=python-docker)
2025-06-02 19:44:37.376656 | orchestrator | skipping: [testbed-node-4] => (item=python3-docker)
2025-06-02 19:44:37.377149 | orchestrator | skipping: [testbed-node-4] => (item=python-docker)
2025-06-02 19:44:37.450142 | orchestrator | skipping: [testbed-node-3]
2025-06-02 19:44:37.450336 | orchestrator | skipping: [testbed-node-5] => (item=python3-docker)
2025-06-02 19:44:37.451170 | orchestrator | skipping: [testbed-node-5] => (item=python-docker)
2025-06-02 19:44:37.515408 | orchestrator | skipping: [testbed-node-4]
2025-06-02 19:44:37.516758 | orchestrator | skipping: [testbed-node-0] => (item=python3-docker)
2025-06-02 19:44:37.519994 | orchestrator | skipping: [testbed-node-0] => (item=python-docker)
2025-06-02 19:44:37.580064 | orchestrator | skipping: [testbed-node-5]
2025-06-02 19:44:37.580729 | orchestrator | skipping: [testbed-node-1] => (item=python3-docker)
2025-06-02 19:44:37.584034 | orchestrator | skipping: [testbed-node-1] => (item=python-docker)  2025-06-02 19:44:37.684641 | orchestrator | skipping: [testbed-node-0] 2025-06-02 19:44:37.686530 | orchestrator | skipping: [testbed-node-1] 2025-06-02 19:44:37.687493 | orchestrator | skipping: [testbed-node-2] => (item=python3-docker)  2025-06-02 19:44:37.690082 | orchestrator | skipping: [testbed-node-2] => (item=python-docker)  2025-06-02 19:44:37.690725 | orchestrator | skipping: [testbed-node-2] 2025-06-02 19:44:37.691622 | orchestrator | 2025-06-02 19:44:37.693037 | orchestrator | TASK [osism.services.docker : Install python3-pip package (install python bindings from pip)] *** 2025-06-02 19:44:37.693464 | orchestrator | Monday 02 June 2025 19:44:37 +0000 (0:00:00.520) 0:06:09.292 *********** 2025-06-02 19:44:37.810154 | orchestrator | skipping: [testbed-manager] 2025-06-02 19:44:37.877410 | orchestrator | skipping: [testbed-node-3] 2025-06-02 19:44:37.939490 | orchestrator | skipping: [testbed-node-4] 2025-06-02 19:44:38.000273 | orchestrator | skipping: [testbed-node-5] 2025-06-02 19:44:38.068873 | orchestrator | skipping: [testbed-node-0] 2025-06-02 19:44:38.152237 | orchestrator | skipping: [testbed-node-1] 2025-06-02 19:44:38.153120 | orchestrator | skipping: [testbed-node-2] 2025-06-02 19:44:38.153951 | orchestrator | 2025-06-02 19:44:38.156968 | orchestrator | TASK [osism.services.docker : Install docker packages (install python bindings from pip)] *** 2025-06-02 19:44:38.156997 | orchestrator | Monday 02 June 2025 19:44:38 +0000 (0:00:00.465) 0:06:09.757 *********** 2025-06-02 19:44:38.281154 | orchestrator | skipping: [testbed-manager] 2025-06-02 19:44:38.342475 | orchestrator | skipping: [testbed-node-3] 2025-06-02 19:44:38.404079 | orchestrator | skipping: [testbed-node-4] 2025-06-02 19:44:38.472043 | orchestrator | skipping: [testbed-node-5] 2025-06-02 19:44:38.531794 | orchestrator | skipping: [testbed-node-0] 2025-06-02 19:44:38.629026 | orchestrator 
| skipping: [testbed-node-1] 2025-06-02 19:44:38.629953 | orchestrator | skipping: [testbed-node-2] 2025-06-02 19:44:38.630968 | orchestrator | 2025-06-02 19:44:38.634139 | orchestrator | TASK [osism.services.docker : Install packages required by docker login] ******* 2025-06-02 19:44:38.634175 | orchestrator | Monday 02 June 2025 19:44:38 +0000 (0:00:00.477) 0:06:10.235 *********** 2025-06-02 19:44:38.758283 | orchestrator | skipping: [testbed-manager] 2025-06-02 19:44:38.817321 | orchestrator | skipping: [testbed-node-3] 2025-06-02 19:44:39.046252 | orchestrator | skipping: [testbed-node-4] 2025-06-02 19:44:39.112471 | orchestrator | skipping: [testbed-node-5] 2025-06-02 19:44:39.173866 | orchestrator | skipping: [testbed-node-0] 2025-06-02 19:44:39.297297 | orchestrator | skipping: [testbed-node-1] 2025-06-02 19:44:39.298213 | orchestrator | skipping: [testbed-node-2] 2025-06-02 19:44:39.299637 | orchestrator | 2025-06-02 19:44:39.303161 | orchestrator | TASK [osism.services.docker : Ensure that some packages are not installed] ***** 2025-06-02 19:44:39.303201 | orchestrator | Monday 02 June 2025 19:44:39 +0000 (0:00:00.667) 0:06:10.903 *********** 2025-06-02 19:44:40.964188 | orchestrator | ok: [testbed-manager] 2025-06-02 19:44:40.965161 | orchestrator | ok: [testbed-node-3] 2025-06-02 19:44:40.966439 | orchestrator | ok: [testbed-node-4] 2025-06-02 19:44:40.967873 | orchestrator | ok: [testbed-node-5] 2025-06-02 19:44:40.969665 | orchestrator | ok: [testbed-node-0] 2025-06-02 19:44:40.970395 | orchestrator | ok: [testbed-node-1] 2025-06-02 19:44:40.971822 | orchestrator | ok: [testbed-node-2] 2025-06-02 19:44:40.971862 | orchestrator | 2025-06-02 19:44:40.972430 | orchestrator | TASK [osism.services.docker : Include config tasks] **************************** 2025-06-02 19:44:40.973193 | orchestrator | Monday 02 June 2025 19:44:40 +0000 (0:00:01.665) 0:06:12.568 *********** 2025-06-02 19:44:41.820430 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/config.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 19:44:41.821223 | orchestrator | 2025-06-02 19:44:41.822130 | orchestrator | TASK [osism.services.docker : Create plugins directory] ************************ 2025-06-02 19:44:41.823117 | orchestrator | Monday 02 June 2025 19:44:41 +0000 (0:00:00.855) 0:06:13.424 *********** 2025-06-02 19:44:42.654697 | orchestrator | ok: [testbed-manager] 2025-06-02 19:44:42.654807 | orchestrator | changed: [testbed-node-3] 2025-06-02 19:44:42.654823 | orchestrator | changed: [testbed-node-4] 2025-06-02 19:44:42.654898 | orchestrator | changed: [testbed-node-5] 2025-06-02 19:44:42.655841 | orchestrator | changed: [testbed-node-0] 2025-06-02 19:44:42.656603 | orchestrator | changed: [testbed-node-1] 2025-06-02 19:44:42.657297 | orchestrator | changed: [testbed-node-2] 2025-06-02 19:44:42.658498 | orchestrator | 2025-06-02 19:44:42.658797 | orchestrator | TASK [osism.services.docker : Create systemd overlay directory] **************** 2025-06-02 19:44:42.659490 | orchestrator | Monday 02 June 2025 19:44:42 +0000 (0:00:00.831) 0:06:14.256 *********** 2025-06-02 19:44:43.042007 | orchestrator | ok: [testbed-manager] 2025-06-02 19:44:43.187415 | orchestrator | changed: [testbed-node-3] 2025-06-02 19:44:43.680198 | orchestrator | changed: [testbed-node-4] 2025-06-02 19:44:43.681868 | orchestrator | changed: [testbed-node-5] 2025-06-02 19:44:43.683060 | orchestrator | changed: [testbed-node-0] 2025-06-02 19:44:43.684263 | orchestrator | changed: [testbed-node-1] 2025-06-02 19:44:43.685828 | orchestrator | changed: [testbed-node-2] 2025-06-02 19:44:43.686399 | orchestrator | 2025-06-02 19:44:43.687287 | orchestrator | TASK [osism.services.docker : Copy systemd overlay file] *********************** 2025-06-02 19:44:43.688054 | orchestrator | Monday 02 June 2025 19:44:43 
+0000 (0:00:01.029) 0:06:15.285 *********** 2025-06-02 19:44:45.024096 | orchestrator | ok: [testbed-manager] 2025-06-02 19:44:45.024848 | orchestrator | changed: [testbed-node-3] 2025-06-02 19:44:45.027051 | orchestrator | changed: [testbed-node-4] 2025-06-02 19:44:45.027974 | orchestrator | changed: [testbed-node-5] 2025-06-02 19:44:45.029383 | orchestrator | changed: [testbed-node-0] 2025-06-02 19:44:45.030177 | orchestrator | changed: [testbed-node-1] 2025-06-02 19:44:45.031017 | orchestrator | changed: [testbed-node-2] 2025-06-02 19:44:45.031797 | orchestrator | 2025-06-02 19:44:45.032826 | orchestrator | TASK [osism.services.docker : Reload systemd daemon if systemd overlay file is changed] *** 2025-06-02 19:44:45.033416 | orchestrator | Monday 02 June 2025 19:44:45 +0000 (0:00:01.344) 0:06:16.630 *********** 2025-06-02 19:44:45.226429 | orchestrator | skipping: [testbed-manager] 2025-06-02 19:44:46.376983 | orchestrator | ok: [testbed-node-4] 2025-06-02 19:44:46.377886 | orchestrator | ok: [testbed-node-3] 2025-06-02 19:44:46.378839 | orchestrator | ok: [testbed-node-5] 2025-06-02 19:44:46.380167 | orchestrator | ok: [testbed-node-0] 2025-06-02 19:44:46.380850 | orchestrator | ok: [testbed-node-1] 2025-06-02 19:44:46.381549 | orchestrator | ok: [testbed-node-2] 2025-06-02 19:44:46.382171 | orchestrator | 2025-06-02 19:44:46.382826 | orchestrator | TASK [osism.services.docker : Copy limits configuration file] ****************** 2025-06-02 19:44:46.383519 | orchestrator | Monday 02 June 2025 19:44:46 +0000 (0:00:01.351) 0:06:17.981 *********** 2025-06-02 19:44:47.740978 | orchestrator | ok: [testbed-manager] 2025-06-02 19:44:47.741179 | orchestrator | changed: [testbed-node-3] 2025-06-02 19:44:47.741637 | orchestrator | changed: [testbed-node-4] 2025-06-02 19:44:47.742479 | orchestrator | changed: [testbed-node-5] 2025-06-02 19:44:47.743608 | orchestrator | changed: [testbed-node-0] 2025-06-02 19:44:47.744795 | orchestrator | changed: [testbed-node-1] 
2025-06-02 19:44:47.745152 | orchestrator | changed: [testbed-node-2] 2025-06-02 19:44:47.745844 | orchestrator | 2025-06-02 19:44:47.746795 | orchestrator | TASK [osism.services.docker : Copy daemon.json configuration file] ************* 2025-06-02 19:44:47.747204 | orchestrator | Monday 02 June 2025 19:44:47 +0000 (0:00:01.363) 0:06:19.345 *********** 2025-06-02 19:44:49.300140 | orchestrator | changed: [testbed-manager] 2025-06-02 19:44:49.300718 | orchestrator | changed: [testbed-node-3] 2025-06-02 19:44:49.302610 | orchestrator | changed: [testbed-node-4] 2025-06-02 19:44:49.303672 | orchestrator | changed: [testbed-node-5] 2025-06-02 19:44:49.304712 | orchestrator | changed: [testbed-node-0] 2025-06-02 19:44:49.305459 | orchestrator | changed: [testbed-node-1] 2025-06-02 19:44:49.306132 | orchestrator | changed: [testbed-node-2] 2025-06-02 19:44:49.306792 | orchestrator | 2025-06-02 19:44:49.307725 | orchestrator | TASK [osism.services.docker : Include service tasks] *************************** 2025-06-02 19:44:49.308303 | orchestrator | Monday 02 June 2025 19:44:49 +0000 (0:00:01.558) 0:06:20.904 *********** 2025-06-02 19:44:50.175258 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/service.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 19:44:50.175395 | orchestrator | 2025-06-02 19:44:50.176911 | orchestrator | TASK [osism.services.docker : Reload systemd daemon] *************************** 2025-06-02 19:44:50.178464 | orchestrator | Monday 02 June 2025 19:44:50 +0000 (0:00:00.875) 0:06:21.779 *********** 2025-06-02 19:44:51.535445 | orchestrator | ok: [testbed-manager] 2025-06-02 19:44:51.535684 | orchestrator | ok: [testbed-node-3] 2025-06-02 19:44:51.537143 | orchestrator | ok: [testbed-node-4] 2025-06-02 19:44:51.538203 | orchestrator | ok: [testbed-node-5] 2025-06-02 19:44:51.540932 | orchestrator | ok: 
[testbed-node-0] 2025-06-02 19:44:51.541683 | orchestrator | ok: [testbed-node-1] 2025-06-02 19:44:51.542013 | orchestrator | ok: [testbed-node-2] 2025-06-02 19:44:51.543462 | orchestrator | 2025-06-02 19:44:51.544705 | orchestrator | TASK [osism.services.docker : Manage service] ********************************** 2025-06-02 19:44:51.545160 | orchestrator | Monday 02 June 2025 19:44:51 +0000 (0:00:01.361) 0:06:23.141 *********** 2025-06-02 19:44:52.662302 | orchestrator | ok: [testbed-manager] 2025-06-02 19:44:52.663155 | orchestrator | ok: [testbed-node-3] 2025-06-02 19:44:52.663450 | orchestrator | ok: [testbed-node-4] 2025-06-02 19:44:52.664508 | orchestrator | ok: [testbed-node-5] 2025-06-02 19:44:52.665287 | orchestrator | ok: [testbed-node-0] 2025-06-02 19:44:52.666279 | orchestrator | ok: [testbed-node-1] 2025-06-02 19:44:52.667547 | orchestrator | ok: [testbed-node-2] 2025-06-02 19:44:52.668302 | orchestrator | 2025-06-02 19:44:52.668811 | orchestrator | TASK [osism.services.docker : Manage docker socket service] ******************** 2025-06-02 19:44:52.669702 | orchestrator | Monday 02 June 2025 19:44:52 +0000 (0:00:01.125) 0:06:24.267 *********** 2025-06-02 19:44:53.990472 | orchestrator | ok: [testbed-manager] 2025-06-02 19:44:53.991326 | orchestrator | ok: [testbed-node-3] 2025-06-02 19:44:53.992685 | orchestrator | ok: [testbed-node-4] 2025-06-02 19:44:53.993871 | orchestrator | ok: [testbed-node-5] 2025-06-02 19:44:53.994849 | orchestrator | ok: [testbed-node-0] 2025-06-02 19:44:53.995829 | orchestrator | ok: [testbed-node-1] 2025-06-02 19:44:53.996711 | orchestrator | ok: [testbed-node-2] 2025-06-02 19:44:53.997461 | orchestrator | 2025-06-02 19:44:53.998179 | orchestrator | TASK [osism.services.docker : Manage containerd service] *********************** 2025-06-02 19:44:53.998992 | orchestrator | Monday 02 June 2025 19:44:53 +0000 (0:00:01.328) 0:06:25.595 *********** 2025-06-02 19:44:55.156535 | orchestrator | ok: [testbed-manager] 2025-06-02 
19:44:55.157582 | orchestrator | ok: [testbed-node-3] 2025-06-02 19:44:55.157643 | orchestrator | ok: [testbed-node-4] 2025-06-02 19:44:55.157666 | orchestrator | ok: [testbed-node-5] 2025-06-02 19:44:55.158178 | orchestrator | ok: [testbed-node-0] 2025-06-02 19:44:55.158747 | orchestrator | ok: [testbed-node-1] 2025-06-02 19:44:55.159554 | orchestrator | ok: [testbed-node-2] 2025-06-02 19:44:55.160657 | orchestrator | 2025-06-02 19:44:55.160828 | orchestrator | TASK [osism.services.docker : Include bootstrap tasks] ************************* 2025-06-02 19:44:55.162840 | orchestrator | Monday 02 June 2025 19:44:55 +0000 (0:00:01.164) 0:06:26.759 *********** 2025-06-02 19:44:56.323756 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/bootstrap.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 19:44:56.326132 | orchestrator | 2025-06-02 19:44:56.326178 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-06-02 19:44:56.326193 | orchestrator | Monday 02 June 2025 19:44:56 +0000 (0:00:00.885) 0:06:27.645 *********** 2025-06-02 19:44:56.326205 | orchestrator | 2025-06-02 19:44:56.326801 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-06-02 19:44:56.329025 | orchestrator | Monday 02 June 2025 19:44:56 +0000 (0:00:00.038) 0:06:27.683 *********** 2025-06-02 19:44:56.329198 | orchestrator | 2025-06-02 19:44:56.330010 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-06-02 19:44:56.330987 | orchestrator | Monday 02 June 2025 19:44:56 +0000 (0:00:00.044) 0:06:27.728 *********** 2025-06-02 19:44:56.331643 | orchestrator | 2025-06-02 19:44:56.332501 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-06-02 19:44:56.333333 | 
orchestrator | Monday 02 June 2025 19:44:56 +0000 (0:00:00.040) 0:06:27.768 *********** 2025-06-02 19:44:56.334135 | orchestrator | 2025-06-02 19:44:56.334611 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-06-02 19:44:56.335393 | orchestrator | Monday 02 June 2025 19:44:56 +0000 (0:00:00.038) 0:06:27.806 *********** 2025-06-02 19:44:56.336377 | orchestrator | 2025-06-02 19:44:56.337089 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-06-02 19:44:56.337764 | orchestrator | Monday 02 June 2025 19:44:56 +0000 (0:00:00.044) 0:06:27.851 *********** 2025-06-02 19:44:56.338663 | orchestrator | 2025-06-02 19:44:56.338963 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-06-02 19:44:56.340456 | orchestrator | Monday 02 June 2025 19:44:56 +0000 (0:00:00.037) 0:06:27.889 *********** 2025-06-02 19:44:56.341481 | orchestrator | 2025-06-02 19:44:56.343204 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2025-06-02 19:44:56.343761 | orchestrator | Monday 02 June 2025 19:44:56 +0000 (0:00:00.037) 0:06:27.926 *********** 2025-06-02 19:44:57.637399 | orchestrator | ok: [testbed-node-0] 2025-06-02 19:44:57.638274 | orchestrator | ok: [testbed-node-1] 2025-06-02 19:44:57.639028 | orchestrator | ok: [testbed-node-2] 2025-06-02 19:44:57.641358 | orchestrator | 2025-06-02 19:44:57.641417 | orchestrator | RUNNING HANDLER [osism.services.rsyslog : Restart rsyslog service] ************* 2025-06-02 19:44:57.641438 | orchestrator | Monday 02 June 2025 19:44:57 +0000 (0:00:01.314) 0:06:29.241 *********** 2025-06-02 19:44:58.930279 | orchestrator | changed: [testbed-manager] 2025-06-02 19:44:58.930411 | orchestrator | changed: [testbed-node-3] 2025-06-02 19:44:58.932789 | orchestrator | changed: [testbed-node-4] 2025-06-02 19:44:58.933337 | orchestrator | changed: [testbed-node-5] 
2025-06-02 19:44:58.933821 | orchestrator | changed: [testbed-node-1] 2025-06-02 19:44:58.934638 | orchestrator | changed: [testbed-node-0] 2025-06-02 19:44:58.934991 | orchestrator | changed: [testbed-node-2] 2025-06-02 19:44:58.935673 | orchestrator | 2025-06-02 19:44:58.936413 | orchestrator | RUNNING HANDLER [osism.services.smartd : Restart smartd service] *************** 2025-06-02 19:44:58.936684 | orchestrator | Monday 02 June 2025 19:44:58 +0000 (0:00:01.292) 0:06:30.533 *********** 2025-06-02 19:45:00.064106 | orchestrator | changed: [testbed-manager] 2025-06-02 19:45:00.064222 | orchestrator | changed: [testbed-node-3] 2025-06-02 19:45:00.064486 | orchestrator | changed: [testbed-node-4] 2025-06-02 19:45:00.066136 | orchestrator | changed: [testbed-node-5] 2025-06-02 19:45:00.066167 | orchestrator | changed: [testbed-node-0] 2025-06-02 19:45:00.066866 | orchestrator | changed: [testbed-node-1] 2025-06-02 19:45:00.067248 | orchestrator | changed: [testbed-node-2] 2025-06-02 19:45:00.067750 | orchestrator | 2025-06-02 19:45:00.068278 | orchestrator | RUNNING HANDLER [osism.services.docker : Restart docker service] *************** 2025-06-02 19:45:00.068739 | orchestrator | Monday 02 June 2025 19:45:00 +0000 (0:00:01.134) 0:06:31.668 *********** 2025-06-02 19:45:00.195352 | orchestrator | skipping: [testbed-manager] 2025-06-02 19:45:02.325600 | orchestrator | changed: [testbed-node-4] 2025-06-02 19:45:02.325773 | orchestrator | changed: [testbed-node-3] 2025-06-02 19:45:02.326794 | orchestrator | changed: [testbed-node-5] 2025-06-02 19:45:02.327899 | orchestrator | changed: [testbed-node-0] 2025-06-02 19:45:02.330516 | orchestrator | changed: [testbed-node-1] 2025-06-02 19:45:02.331518 | orchestrator | changed: [testbed-node-2] 2025-06-02 19:45:02.332112 | orchestrator | 2025-06-02 19:45:02.332984 | orchestrator | RUNNING HANDLER [osism.services.docker : Wait after docker service restart] **** 2025-06-02 19:45:02.333436 | orchestrator | Monday 02 June 2025 
19:45:02 +0000 (0:00:02.261) 0:06:33.929 *********** 2025-06-02 19:45:02.419815 | orchestrator | skipping: [testbed-node-3] 2025-06-02 19:45:02.420011 | orchestrator | 2025-06-02 19:45:02.420950 | orchestrator | TASK [osism.services.docker : Add user to docker group] ************************ 2025-06-02 19:45:02.421175 | orchestrator | Monday 02 June 2025 19:45:02 +0000 (0:00:00.098) 0:06:34.027 *********** 2025-06-02 19:45:03.389181 | orchestrator | ok: [testbed-manager] 2025-06-02 19:45:03.389952 | orchestrator | changed: [testbed-node-4] 2025-06-02 19:45:03.390874 | orchestrator | changed: [testbed-node-5] 2025-06-02 19:45:03.392168 | orchestrator | changed: [testbed-node-3] 2025-06-02 19:45:03.392510 | orchestrator | changed: [testbed-node-0] 2025-06-02 19:45:03.393127 | orchestrator | changed: [testbed-node-1] 2025-06-02 19:45:03.393704 | orchestrator | changed: [testbed-node-2] 2025-06-02 19:45:03.394385 | orchestrator | 2025-06-02 19:45:03.395048 | orchestrator | TASK [osism.services.docker : Log into private registry and force re-authorization] *** 2025-06-02 19:45:03.395525 | orchestrator | Monday 02 June 2025 19:45:03 +0000 (0:00:00.965) 0:06:34.993 *********** 2025-06-02 19:45:03.729807 | orchestrator | skipping: [testbed-manager] 2025-06-02 19:45:03.798527 | orchestrator | skipping: [testbed-node-3] 2025-06-02 19:45:03.889553 | orchestrator | skipping: [testbed-node-4] 2025-06-02 19:45:03.985101 | orchestrator | skipping: [testbed-node-5] 2025-06-02 19:45:04.048545 | orchestrator | skipping: [testbed-node-0] 2025-06-02 19:45:04.166301 | orchestrator | skipping: [testbed-node-1] 2025-06-02 19:45:04.167221 | orchestrator | skipping: [testbed-node-2] 2025-06-02 19:45:04.167684 | orchestrator | 2025-06-02 19:45:04.168369 | orchestrator | TASK [osism.services.docker : Include facts tasks] ***************************** 2025-06-02 19:45:04.169168 | orchestrator | Monday 02 June 2025 19:45:04 +0000 (0:00:00.778) 0:06:35.771 *********** 2025-06-02 19:45:05.084764 
| orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/facts.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 19:45:05.085699 | orchestrator | 2025-06-02 19:45:05.088735 | orchestrator | TASK [osism.services.docker : Create facts directory] ************************** 2025-06-02 19:45:05.088763 | orchestrator | Monday 02 June 2025 19:45:05 +0000 (0:00:00.919) 0:06:36.690 *********** 2025-06-02 19:45:05.498993 | orchestrator | ok: [testbed-manager] 2025-06-02 19:45:05.891279 | orchestrator | ok: [testbed-node-3] 2025-06-02 19:45:05.892089 | orchestrator | ok: [testbed-node-4] 2025-06-02 19:45:05.893418 | orchestrator | ok: [testbed-node-5] 2025-06-02 19:45:05.894314 | orchestrator | ok: [testbed-node-0] 2025-06-02 19:45:05.895056 | orchestrator | ok: [testbed-node-1] 2025-06-02 19:45:05.895675 | orchestrator | ok: [testbed-node-2] 2025-06-02 19:45:05.896265 | orchestrator | 2025-06-02 19:45:05.896927 | orchestrator | TASK [osism.services.docker : Copy docker fact files] ************************** 2025-06-02 19:45:05.897243 | orchestrator | Monday 02 June 2025 19:45:05 +0000 (0:00:00.807) 0:06:37.498 *********** 2025-06-02 19:45:08.502365 | orchestrator | ok: [testbed-manager] => (item=docker_containers) 2025-06-02 19:45:08.503382 | orchestrator | changed: [testbed-node-3] => (item=docker_containers) 2025-06-02 19:45:08.504748 | orchestrator | changed: [testbed-node-4] => (item=docker_containers) 2025-06-02 19:45:08.506673 | orchestrator | changed: [testbed-node-5] => (item=docker_containers) 2025-06-02 19:45:08.507072 | orchestrator | changed: [testbed-node-0] => (item=docker_containers) 2025-06-02 19:45:08.508085 | orchestrator | changed: [testbed-node-1] => (item=docker_containers) 2025-06-02 19:45:08.510194 | orchestrator | ok: [testbed-manager] => (item=docker_images) 2025-06-02 19:45:08.510897 | orchestrator | changed: 
[testbed-node-2] => (item=docker_containers) 2025-06-02 19:45:08.511706 | orchestrator | changed: [testbed-node-3] => (item=docker_images) 2025-06-02 19:45:08.512220 | orchestrator | changed: [testbed-node-4] => (item=docker_images) 2025-06-02 19:45:08.513373 | orchestrator | changed: [testbed-node-5] => (item=docker_images) 2025-06-02 19:45:08.514157 | orchestrator | changed: [testbed-node-0] => (item=docker_images) 2025-06-02 19:45:08.514911 | orchestrator | changed: [testbed-node-1] => (item=docker_images) 2025-06-02 19:45:08.515772 | orchestrator | changed: [testbed-node-2] => (item=docker_images) 2025-06-02 19:45:08.516626 | orchestrator | 2025-06-02 19:45:08.516901 | orchestrator | TASK [osism.commons.docker_compose : This install type is not supported] ******* 2025-06-02 19:45:08.517774 | orchestrator | Monday 02 June 2025 19:45:08 +0000 (0:00:02.607) 0:06:40.105 *********** 2025-06-02 19:45:08.644360 | orchestrator | skipping: [testbed-manager] 2025-06-02 19:45:08.707927 | orchestrator | skipping: [testbed-node-3] 2025-06-02 19:45:08.777372 | orchestrator | skipping: [testbed-node-4] 2025-06-02 19:45:08.841194 | orchestrator | skipping: [testbed-node-5] 2025-06-02 19:45:08.915711 | orchestrator | skipping: [testbed-node-0] 2025-06-02 19:45:09.011612 | orchestrator | skipping: [testbed-node-1] 2025-06-02 19:45:09.013066 | orchestrator | skipping: [testbed-node-2] 2025-06-02 19:45:09.014624 | orchestrator | 2025-06-02 19:45:09.015188 | orchestrator | TASK [osism.commons.docker_compose : Include distribution specific install tasks] *** 2025-06-02 19:45:09.016733 | orchestrator | Monday 02 June 2025 19:45:09 +0000 (0:00:00.511) 0:06:40.617 *********** 2025-06-02 19:45:09.794125 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/docker_compose/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 19:45:09.795238 
| orchestrator | 2025-06-02 19:45:09.797261 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose apt preferences file] *** 2025-06-02 19:45:09.798584 | orchestrator | Monday 02 June 2025 19:45:09 +0000 (0:00:00.780) 0:06:41.397 *********** 2025-06-02 19:45:10.349604 | orchestrator | ok: [testbed-manager] 2025-06-02 19:45:10.415070 | orchestrator | ok: [testbed-node-3] 2025-06-02 19:45:10.840114 | orchestrator | ok: [testbed-node-4] 2025-06-02 19:45:10.840722 | orchestrator | ok: [testbed-node-5] 2025-06-02 19:45:10.842135 | orchestrator | ok: [testbed-node-0] 2025-06-02 19:45:10.842932 | orchestrator | ok: [testbed-node-1] 2025-06-02 19:45:10.844304 | orchestrator | ok: [testbed-node-2] 2025-06-02 19:45:10.845101 | orchestrator | 2025-06-02 19:45:10.846279 | orchestrator | TASK [osism.commons.docker_compose : Get checksum of docker-compose file] ****** 2025-06-02 19:45:10.847243 | orchestrator | Monday 02 June 2025 19:45:10 +0000 (0:00:01.046) 0:06:42.444 *********** 2025-06-02 19:45:11.244517 | orchestrator | ok: [testbed-manager] 2025-06-02 19:45:11.629227 | orchestrator | ok: [testbed-node-3] 2025-06-02 19:45:11.629780 | orchestrator | ok: [testbed-node-4] 2025-06-02 19:45:11.630843 | orchestrator | ok: [testbed-node-5] 2025-06-02 19:45:11.631752 | orchestrator | ok: [testbed-node-0] 2025-06-02 19:45:11.632817 | orchestrator | ok: [testbed-node-1] 2025-06-02 19:45:11.633791 | orchestrator | ok: [testbed-node-2] 2025-06-02 19:45:11.634268 | orchestrator | 2025-06-02 19:45:11.635011 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose binary] ************* 2025-06-02 19:45:11.635796 | orchestrator | Monday 02 June 2025 19:45:11 +0000 (0:00:00.789) 0:06:43.233 *********** 2025-06-02 19:45:11.759814 | orchestrator | skipping: [testbed-manager] 2025-06-02 19:45:11.826635 | orchestrator | skipping: [testbed-node-3] 2025-06-02 19:45:11.882286 | orchestrator | skipping: [testbed-node-4] 2025-06-02 19:45:11.950490 | 
orchestrator | skipping: [testbed-node-5] 2025-06-02 19:45:12.013389 | orchestrator | skipping: [testbed-node-0] 2025-06-02 19:45:12.123368 | orchestrator | skipping: [testbed-node-1] 2025-06-02 19:45:12.124422 | orchestrator | skipping: [testbed-node-2] 2025-06-02 19:45:12.125427 | orchestrator | 2025-06-02 19:45:12.126246 | orchestrator | TASK [osism.commons.docker_compose : Uninstall docker-compose package] ********* 2025-06-02 19:45:12.127297 | orchestrator | Monday 02 June 2025 19:45:12 +0000 (0:00:00.495) 0:06:43.729 *********** 2025-06-02 19:45:13.553459 | orchestrator | ok: [testbed-manager] 2025-06-02 19:45:13.554921 | orchestrator | ok: [testbed-node-3] 2025-06-02 19:45:13.555001 | orchestrator | ok: [testbed-node-4] 2025-06-02 19:45:13.556081 | orchestrator | ok: [testbed-node-5] 2025-06-02 19:45:13.556809 | orchestrator | ok: [testbed-node-0] 2025-06-02 19:45:13.557426 | orchestrator | ok: [testbed-node-1] 2025-06-02 19:45:13.558169 | orchestrator | ok: [testbed-node-2] 2025-06-02 19:45:13.559039 | orchestrator | 2025-06-02 19:45:13.559780 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose script] *************** 2025-06-02 19:45:13.560331 | orchestrator | Monday 02 June 2025 19:45:13 +0000 (0:00:01.429) 0:06:45.158 *********** 2025-06-02 19:45:13.676934 | orchestrator | skipping: [testbed-manager] 2025-06-02 19:45:13.744409 | orchestrator | skipping: [testbed-node-3] 2025-06-02 19:45:13.805293 | orchestrator | skipping: [testbed-node-4] 2025-06-02 19:45:13.867661 | orchestrator | skipping: [testbed-node-5] 2025-06-02 19:45:13.933722 | orchestrator | skipping: [testbed-node-0] 2025-06-02 19:45:14.028729 | orchestrator | skipping: [testbed-node-1] 2025-06-02 19:45:14.028921 | orchestrator | skipping: [testbed-node-2] 2025-06-02 19:45:14.030127 | orchestrator | 2025-06-02 19:45:14.030985 | orchestrator | TASK [osism.commons.docker_compose : Install docker-compose-plugin package] **** 2025-06-02 19:45:14.032119 | orchestrator | 
Monday 02 June 2025 19:45:14 +0000 (0:00:00.475) 0:06:45.633 *********** 2025-06-02 19:45:21.390918 | orchestrator | ok: [testbed-manager] 2025-06-02 19:45:21.391113 | orchestrator | changed: [testbed-node-3] 2025-06-02 19:45:21.391960 | orchestrator | changed: [testbed-node-4] 2025-06-02 19:45:21.393490 | orchestrator | changed: [testbed-node-5] 2025-06-02 19:45:21.395152 | orchestrator | changed: [testbed-node-1] 2025-06-02 19:45:21.396001 | orchestrator | changed: [testbed-node-2] 2025-06-02 19:45:21.397311 | orchestrator | changed: [testbed-node-0] 2025-06-02 19:45:21.397579 | orchestrator | 2025-06-02 19:45:21.398012 | orchestrator | TASK [osism.commons.docker_compose : Copy osism.target systemd file] *********** 2025-06-02 19:45:21.399130 | orchestrator | Monday 02 June 2025 19:45:21 +0000 (0:00:07.361) 0:06:52.995 *********** 2025-06-02 19:45:22.665749 | orchestrator | ok: [testbed-manager] 2025-06-02 19:45:22.665946 | orchestrator | changed: [testbed-node-3] 2025-06-02 19:45:22.667419 | orchestrator | changed: [testbed-node-4] 2025-06-02 19:45:22.668819 | orchestrator | changed: [testbed-node-5] 2025-06-02 19:45:22.669921 | orchestrator | changed: [testbed-node-0] 2025-06-02 19:45:22.670659 | orchestrator | changed: [testbed-node-1] 2025-06-02 19:45:22.671296 | orchestrator | changed: [testbed-node-2] 2025-06-02 19:45:22.672973 | orchestrator | 2025-06-02 19:45:22.673022 | orchestrator | TASK [osism.commons.docker_compose : Enable osism.target] ********************** 2025-06-02 19:45:22.673715 | orchestrator | Monday 02 June 2025 19:45:22 +0000 (0:00:01.276) 0:06:54.272 *********** 2025-06-02 19:45:24.381042 | orchestrator | ok: [testbed-manager] 2025-06-02 19:45:24.381213 | orchestrator | changed: [testbed-node-3] 2025-06-02 19:45:24.381768 | orchestrator | changed: [testbed-node-4] 2025-06-02 19:45:24.384783 | orchestrator | changed: [testbed-node-5] 2025-06-02 19:45:24.385173 | orchestrator | changed: [testbed-node-0] 2025-06-02 19:45:24.385706 | 
orchestrator | changed: [testbed-node-1]
2025-06-02 19:45:24.386275 | orchestrator | changed: [testbed-node-2]
2025-06-02 19:45:24.386932 | orchestrator |
2025-06-02 19:45:24.387761 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose systemd unit file] ****
2025-06-02 19:45:24.388244 | orchestrator | Monday 02 June 2025 19:45:24 +0000 (0:00:01.713) 0:06:55.985 ***********
2025-06-02 19:45:26.013690 | orchestrator | ok: [testbed-manager]
2025-06-02 19:45:26.014828 | orchestrator | changed: [testbed-node-3]
2025-06-02 19:45:26.017641 | orchestrator | changed: [testbed-node-4]
2025-06-02 19:45:26.018438 | orchestrator | changed: [testbed-node-0]
2025-06-02 19:45:26.019952 | orchestrator | changed: [testbed-node-1]
2025-06-02 19:45:26.020401 | orchestrator | changed: [testbed-node-5]
2025-06-02 19:45:26.021642 | orchestrator | changed: [testbed-node-2]
2025-06-02 19:45:26.022756 | orchestrator |
2025-06-02 19:45:26.023026 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] *********************
2025-06-02 19:45:26.023817 | orchestrator | Monday 02 June 2025 19:45:25 +0000 (0:00:01.626) 0:06:57.611 ***********
2025-06-02 19:45:26.422778 | orchestrator | ok: [testbed-manager]
2025-06-02 19:45:27.073423 | orchestrator | ok: [testbed-node-3]
2025-06-02 19:45:27.073886 | orchestrator | ok: [testbed-node-4]
2025-06-02 19:45:27.074074 | orchestrator | ok: [testbed-node-5]
2025-06-02 19:45:27.075262 | orchestrator | ok: [testbed-node-0]
2025-06-02 19:45:27.075296 | orchestrator | ok: [testbed-node-1]
2025-06-02 19:45:27.075676 | orchestrator | ok: [testbed-node-2]
2025-06-02 19:45:27.075780 | orchestrator |
2025-06-02 19:45:27.077172 | orchestrator | TASK [osism.commons.facts : Copy fact files] ***********************************
2025-06-02 19:45:27.077442 | orchestrator | Monday 02 June 2025 19:45:27 +0000 (0:00:01.065) 0:06:58.677 ***********
2025-06-02 19:45:27.205792 | orchestrator | skipping: [testbed-manager]
2025-06-02 19:45:27.292770 | orchestrator | skipping: [testbed-node-3]
2025-06-02 19:45:27.355052 | orchestrator | skipping: [testbed-node-4]
2025-06-02 19:45:27.419523 | orchestrator | skipping: [testbed-node-5]
2025-06-02 19:45:27.488477 | orchestrator | skipping: [testbed-node-0]
2025-06-02 19:45:27.867496 | orchestrator | skipping: [testbed-node-1]
2025-06-02 19:45:27.868465 | orchestrator | skipping: [testbed-node-2]
2025-06-02 19:45:27.869166 | orchestrator |
2025-06-02 19:45:27.869837 | orchestrator | TASK [osism.services.chrony : Check minimum and maximum number of servers] *****
2025-06-02 19:45:27.871759 | orchestrator | Monday 02 June 2025 19:45:27 +0000 (0:00:00.796) 0:06:59.473 ***********
2025-06-02 19:45:28.010331 | orchestrator | skipping: [testbed-manager]
2025-06-02 19:45:28.074444 | orchestrator | skipping: [testbed-node-3]
2025-06-02 19:45:28.146263 | orchestrator | skipping: [testbed-node-4]
2025-06-02 19:45:28.209517 | orchestrator | skipping: [testbed-node-5]
2025-06-02 19:45:28.272196 | orchestrator | skipping: [testbed-node-0]
2025-06-02 19:45:28.375336 | orchestrator | skipping: [testbed-node-1]
2025-06-02 19:45:28.375521 | orchestrator | skipping: [testbed-node-2]
2025-06-02 19:45:28.376747 | orchestrator |
2025-06-02 19:45:28.377453 | orchestrator | TASK [osism.services.chrony : Gather variables for each operating system] ******
2025-06-02 19:45:28.378732 | orchestrator | Monday 02 June 2025 19:45:28 +0000 (0:00:00.507) 0:06:59.981 ***********
2025-06-02 19:45:28.505910 | orchestrator | ok: [testbed-manager]
2025-06-02 19:45:28.579404 | orchestrator | ok: [testbed-node-3]
2025-06-02 19:45:28.646645 | orchestrator | ok: [testbed-node-4]
2025-06-02 19:45:28.712607 | orchestrator | ok: [testbed-node-5]
2025-06-02 19:45:28.964597 | orchestrator | ok: [testbed-node-0]
2025-06-02 19:45:29.085111 | orchestrator | ok: [testbed-node-1]
2025-06-02 19:45:29.085809 | orchestrator | ok: [testbed-node-2]
2025-06-02 19:45:29.086332 | orchestrator |
2025-06-02 19:45:29.087245 | orchestrator | TASK [osism.services.chrony : Set chrony_conf_file variable to default value] ***
2025-06-02 19:45:29.088287 | orchestrator | Monday 02 June 2025 19:45:29 +0000 (0:00:00.706) 0:07:00.688 ***********
2025-06-02 19:45:29.218256 | orchestrator | ok: [testbed-manager]
2025-06-02 19:45:29.298288 | orchestrator | ok: [testbed-node-3]
2025-06-02 19:45:29.363317 | orchestrator | ok: [testbed-node-4]
2025-06-02 19:45:29.430814 | orchestrator | ok: [testbed-node-5]
2025-06-02 19:45:29.492709 | orchestrator | ok: [testbed-node-0]
2025-06-02 19:45:29.598331 | orchestrator | ok: [testbed-node-1]
2025-06-02 19:45:29.598480 | orchestrator | ok: [testbed-node-2]
2025-06-02 19:45:29.599758 | orchestrator |
2025-06-02 19:45:29.600733 | orchestrator | TASK [osism.services.chrony : Set chrony_key_file variable to default value] ***
2025-06-02 19:45:29.602270 | orchestrator | Monday 02 June 2025 19:45:29 +0000 (0:00:00.514) 0:07:01.202 ***********
2025-06-02 19:45:29.728716 | orchestrator | ok: [testbed-manager]
2025-06-02 19:45:29.791361 | orchestrator | ok: [testbed-node-3]
2025-06-02 19:45:29.856750 | orchestrator | ok: [testbed-node-4]
2025-06-02 19:45:29.920961 | orchestrator | ok: [testbed-node-5]
2025-06-02 19:45:29.981289 | orchestrator | ok: [testbed-node-0]
2025-06-02 19:45:30.083551 | orchestrator | ok: [testbed-node-1]
2025-06-02 19:45:30.083833 | orchestrator | ok: [testbed-node-2]
2025-06-02 19:45:30.084849 | orchestrator |
2025-06-02 19:45:30.085899 | orchestrator | TASK [osism.services.chrony : Populate service facts] **************************
2025-06-02 19:45:30.087170 | orchestrator | Monday 02 June 2025 19:45:30 +0000 (0:00:00.486) 0:07:01.689 ***********
2025-06-02 19:45:35.653439 | orchestrator | ok: [testbed-manager]
2025-06-02 19:45:35.653620 | orchestrator | ok: [testbed-node-4]
2025-06-02 19:45:35.656862 | orchestrator | ok: [testbed-node-3]
2025-06-02 19:45:35.658013 | orchestrator | ok: [testbed-node-2]
2025-06-02 19:45:35.659536 | orchestrator | ok: [testbed-node-1]
2025-06-02 19:45:35.662056 | orchestrator | ok: [testbed-node-5]
2025-06-02 19:45:35.663492 | orchestrator | ok: [testbed-node-0]
2025-06-02 19:45:35.664632 | orchestrator |
2025-06-02 19:45:35.665008 | orchestrator | TASK [osism.services.chrony : Manage timesyncd service] ************************
2025-06-02 19:45:35.665687 | orchestrator | Monday 02 June 2025 19:45:35 +0000 (0:00:05.564) 0:07:07.254 ***********
2025-06-02 19:45:35.785747 | orchestrator | skipping: [testbed-manager]
2025-06-02 19:45:35.847440 | orchestrator | skipping: [testbed-node-3]
2025-06-02 19:45:35.912916 | orchestrator | skipping: [testbed-node-4]
2025-06-02 19:45:35.984650 | orchestrator | skipping: [testbed-node-5]
2025-06-02 19:45:36.046761 | orchestrator | skipping: [testbed-node-0]
2025-06-02 19:45:36.175206 | orchestrator | skipping: [testbed-node-1]
2025-06-02 19:45:36.177482 | orchestrator | skipping: [testbed-node-2]
2025-06-02 19:45:36.178270 | orchestrator |
2025-06-02 19:45:36.179404 | orchestrator | TASK [osism.services.chrony : Include distribution specific install tasks] *****
2025-06-02 19:45:36.180115 | orchestrator | Monday 02 June 2025 19:45:36 +0000 (0:00:00.525) 0:07:07.779 ***********
2025-06-02 19:45:37.259534 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-06-02 19:45:37.259722 | orchestrator |
2025-06-02 19:45:37.260451 | orchestrator | TASK [osism.services.chrony : Install package] *********************************
2025-06-02 19:45:37.260989 | orchestrator | Monday 02 June 2025 19:45:37 +0000 (0:00:01.086) 0:07:08.866 ***********
2025-06-02 19:45:39.034855 | orchestrator | ok: [testbed-manager]
2025-06-02 19:45:39.035739 | orchestrator | ok: [testbed-node-3]
2025-06-02 19:45:39.037679 | orchestrator | ok: [testbed-node-4]
2025-06-02 19:45:39.038281 | orchestrator | ok: [testbed-node-5]
2025-06-02 19:45:39.040696 | orchestrator | ok: [testbed-node-1]
2025-06-02 19:45:39.041710 | orchestrator | ok: [testbed-node-0]
2025-06-02 19:45:39.042794 | orchestrator | ok: [testbed-node-2]
2025-06-02 19:45:39.043667 | orchestrator |
2025-06-02 19:45:39.045016 | orchestrator | TASK [osism.services.chrony : Manage chrony service] ***************************
2025-06-02 19:45:39.045659 | orchestrator | Monday 02 June 2025 19:45:39 +0000 (0:00:01.772) 0:07:10.638 ***********
2025-06-02 19:45:40.161180 | orchestrator | ok: [testbed-manager]
2025-06-02 19:45:40.161363 | orchestrator | ok: [testbed-node-3]
2025-06-02 19:45:40.162157 | orchestrator | ok: [testbed-node-4]
2025-06-02 19:45:40.163136 | orchestrator | ok: [testbed-node-5]
2025-06-02 19:45:40.163942 | orchestrator | ok: [testbed-node-1]
2025-06-02 19:45:40.164396 | orchestrator | ok: [testbed-node-0]
2025-06-02 19:45:40.165010 | orchestrator | ok: [testbed-node-2]
2025-06-02 19:45:40.165535 | orchestrator |
2025-06-02 19:45:40.166241 | orchestrator | TASK [osism.services.chrony : Check if configuration file exists] **************
2025-06-02 19:45:40.166543 | orchestrator | Monday 02 June 2025 19:45:40 +0000 (0:00:01.128) 0:07:11.767 ***********
2025-06-02 19:45:40.803697 | orchestrator | ok: [testbed-manager]
2025-06-02 19:45:41.213162 | orchestrator | ok: [testbed-node-3]
2025-06-02 19:45:41.214690 | orchestrator | ok: [testbed-node-4]
2025-06-02 19:45:41.215838 | orchestrator | ok: [testbed-node-5]
2025-06-02 19:45:41.216764 | orchestrator | ok: [testbed-node-0]
2025-06-02 19:45:41.217598 | orchestrator | ok: [testbed-node-1]
2025-06-02 19:45:41.218473 | orchestrator | ok: [testbed-node-2]
2025-06-02 19:45:41.219262 | orchestrator |
2025-06-02 19:45:41.220134 | orchestrator | TASK [osism.services.chrony : Copy configuration file] *************************
2025-06-02 19:45:41.220978 | orchestrator | Monday 02 June 2025 19:45:41 +0000 (0:00:01.050) 0:07:12.817 ***********
2025-06-02 19:45:42.952254 | orchestrator | changed: [testbed-manager] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2025-06-02 19:45:42.956425 | orchestrator | changed: [testbed-node-3] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2025-06-02 19:45:42.957159 | orchestrator | changed: [testbed-node-4] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2025-06-02 19:45:42.958126 | orchestrator | changed: [testbed-node-5] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2025-06-02 19:45:42.958733 | orchestrator | changed: [testbed-node-0] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2025-06-02 19:45:42.959528 | orchestrator | changed: [testbed-node-1] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2025-06-02 19:45:42.960364 | orchestrator | changed: [testbed-node-2] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2025-06-02 19:45:42.961202 | orchestrator |
2025-06-02 19:45:42.961929 | orchestrator | TASK [osism.services.lldpd : Include distribution specific install tasks] ******
2025-06-02 19:45:42.962973 | orchestrator | Monday 02 June 2025 19:45:42 +0000 (0:00:01.738) 0:07:14.556 ***********
2025-06-02 19:45:43.757242 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/lldpd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-06-02 19:45:43.757345 | orchestrator |
2025-06-02 19:45:43.758383 | orchestrator | TASK [osism.services.lldpd : Install lldpd package] ****************************
2025-06-02 19:45:43.759299 | orchestrator | Monday 02 June 2025 19:45:43 +0000 (0:00:00.805) 0:07:15.361 ***********
2025-06-02 19:45:52.309958 | orchestrator | changed: [testbed-manager]
2025-06-02 19:45:52.310267 | orchestrator | changed: [testbed-node-4]
2025-06-02 19:45:52.311092 | orchestrator | changed: [testbed-node-1]
2025-06-02 19:45:52.313206 | orchestrator | changed: [testbed-node-3]
2025-06-02 19:45:52.314478 | orchestrator | changed: [testbed-node-2]
2025-06-02 19:45:52.314674 | orchestrator | changed: [testbed-node-5]
2025-06-02 19:45:52.315427 | orchestrator | changed: [testbed-node-0]
2025-06-02 19:45:52.315950 | orchestrator |
2025-06-02 19:45:52.316983 | orchestrator | TASK [osism.services.lldpd : Manage lldpd service] *****************************
2025-06-02 19:45:52.317479 | orchestrator | Monday 02 June 2025 19:45:52 +0000 (0:00:08.551) 0:07:23.913 ***********
2025-06-02 19:45:54.099141 | orchestrator | ok: [testbed-manager]
2025-06-02 19:45:54.099332 | orchestrator | ok: [testbed-node-3]
2025-06-02 19:45:54.099418 | orchestrator | ok: [testbed-node-5]
2025-06-02 19:45:54.099760 | orchestrator | ok: [testbed-node-4]
2025-06-02 19:45:54.100026 | orchestrator | ok: [testbed-node-0]
2025-06-02 19:45:54.100491 | orchestrator | ok: [testbed-node-1]
2025-06-02 19:45:54.100920 | orchestrator | ok: [testbed-node-2]
2025-06-02 19:45:54.101184 | orchestrator |
2025-06-02 19:45:54.101719 | orchestrator | RUNNING HANDLER [osism.commons.docker_compose : Reload systemd daemon] *********
2025-06-02 19:45:54.103072 | orchestrator | Monday 02 June 2025 19:45:54 +0000 (0:00:01.787) 0:07:25.700 ***********
2025-06-02 19:45:55.423143 | orchestrator | ok: [testbed-node-3]
2025-06-02 19:45:55.423234 | orchestrator | ok: [testbed-node-4]
2025-06-02 19:45:55.423663 | orchestrator | ok: [testbed-node-5]
2025-06-02 19:45:55.424457 | orchestrator | ok: [testbed-node-0]
2025-06-02 19:45:55.425900 | orchestrator | ok: [testbed-node-1]
2025-06-02 19:45:55.426291 | orchestrator | ok: [testbed-node-2]
2025-06-02 19:45:55.427296 | orchestrator |
2025-06-02 19:45:55.429174 | orchestrator | RUNNING HANDLER [osism.services.chrony : Restart chrony service] ***************
2025-06-02 19:45:55.429876 | orchestrator | Monday 02 June 2025 19:45:55 +0000 (0:00:01.326) 0:07:27.027 ***********
2025-06-02 19:45:56.891833 | orchestrator | changed: [testbed-manager]
2025-06-02 19:45:56.893778 | orchestrator | changed: [testbed-node-3]
2025-06-02 19:45:56.893985 | orchestrator | changed: [testbed-node-4]
2025-06-02 19:45:56.895771 | orchestrator | changed: [testbed-node-5]
2025-06-02 19:45:56.897499 | orchestrator | changed: [testbed-node-0]
2025-06-02 19:45:56.898643 | orchestrator | changed: [testbed-node-1]
2025-06-02 19:45:56.900424 | orchestrator | changed: [testbed-node-2]
2025-06-02 19:45:56.901401 | orchestrator |
2025-06-02 19:45:56.902911 | orchestrator | PLAY [Apply bootstrap role part 2] *********************************************
2025-06-02 19:45:56.903048 | orchestrator |
2025-06-02 19:45:56.905532 | orchestrator | TASK [Include hardening role] **************************************************
2025-06-02 19:45:56.905850 | orchestrator | Monday 02 June 2025 19:45:56 +0000 (0:00:01.469) 0:07:28.496 ***********
2025-06-02 19:45:57.048215 | orchestrator | skipping: [testbed-manager]
2025-06-02 19:45:57.117031 | orchestrator | skipping: [testbed-node-3]
2025-06-02 19:45:57.182801 | orchestrator | skipping: [testbed-node-4]
2025-06-02 19:45:57.259283 | orchestrator | skipping: [testbed-node-5]
2025-06-02 19:45:57.325291 | orchestrator | skipping: [testbed-node-0]
2025-06-02 19:45:57.449684 | orchestrator | skipping: [testbed-node-1]
2025-06-02 19:45:57.450953 | orchestrator | skipping: [testbed-node-2]
2025-06-02 19:45:57.452404 | orchestrator |
2025-06-02 19:45:57.453170 | orchestrator | PLAY [Apply bootstrap roles part 3] ********************************************
2025-06-02 19:45:57.453864 | orchestrator |
2025-06-02 19:45:57.455002 | orchestrator | TASK [osism.services.journald : Copy configuration file] ***********************
2025-06-02 19:45:57.456342 | orchestrator | Monday 02 June 2025 19:45:57 +0000 (0:00:00.560) 0:07:29.057 ***********
2025-06-02 19:45:58.774196 | orchestrator | changed: [testbed-manager]
2025-06-02 19:45:58.775711 | orchestrator | changed: [testbed-node-3]
2025-06-02 19:45:58.778114 | orchestrator | changed: [testbed-node-4]
2025-06-02 19:45:58.778646 | orchestrator | changed: [testbed-node-5]
2025-06-02 19:45:58.779871 | orchestrator | changed: [testbed-node-0]
2025-06-02 19:45:58.780455 | orchestrator | changed: [testbed-node-1]
2025-06-02 19:45:58.781059 | orchestrator | changed: [testbed-node-2]
2025-06-02 19:45:58.781420 | orchestrator |
2025-06-02 19:45:58.781950 | orchestrator | TASK [osism.services.journald : Manage journald service] ***********************
2025-06-02 19:45:58.782739 | orchestrator | Monday 02 June 2025 19:45:58 +0000 (0:00:01.322) 0:07:30.379 ***********
2025-06-02 19:46:00.190949 | orchestrator | ok: [testbed-manager]
2025-06-02 19:46:00.191811 | orchestrator | ok: [testbed-node-3]
2025-06-02 19:46:00.192228 | orchestrator | ok: [testbed-node-4]
2025-06-02 19:46:00.193388 | orchestrator | ok: [testbed-node-5]
2025-06-02 19:46:00.194659 | orchestrator | ok: [testbed-node-0]
2025-06-02 19:46:00.194851 | orchestrator | ok: [testbed-node-1]
2025-06-02 19:46:00.195847 | orchestrator | ok: [testbed-node-2]
2025-06-02 19:46:00.196375 | orchestrator |
2025-06-02 19:46:00.197422 | orchestrator | TASK [Include auditd role] *****************************************************
2025-06-02 19:46:00.198186 | orchestrator | Monday 02 June 2025 19:46:00 +0000 (0:00:01.416) 0:07:31.796 ***********
2025-06-02 19:46:00.516053 | orchestrator | skipping: [testbed-manager]
2025-06-02 19:46:00.580694 | orchestrator | skipping: [testbed-node-3]
2025-06-02 19:46:00.651221 | orchestrator | skipping: [testbed-node-4]
2025-06-02 19:46:00.714510 | orchestrator | skipping: [testbed-node-5]
2025-06-02 19:46:00.775004 | orchestrator | skipping: [testbed-node-0]
2025-06-02 19:46:01.175665 | orchestrator | skipping: [testbed-node-1]
2025-06-02 19:46:01.175884 | orchestrator | skipping: [testbed-node-2]
2025-06-02 19:46:01.175904 | orchestrator |
2025-06-02 19:46:01.180707 | orchestrator | RUNNING HANDLER [osism.services.journald : Restart journald service] ***********
2025-06-02 19:46:01.180791 | orchestrator | Monday 02 June 2025 19:46:01 +0000 (0:00:00.984) 0:07:32.780 ***********
2025-06-02 19:46:02.382837 | orchestrator | changed: [testbed-manager]
2025-06-02 19:46:02.384347 | orchestrator | changed: [testbed-node-3]
2025-06-02 19:46:02.385904 | orchestrator | changed: [testbed-node-4]
2025-06-02 19:46:02.386140 | orchestrator | changed: [testbed-node-5]
2025-06-02 19:46:02.387589 | orchestrator | changed: [testbed-node-0]
2025-06-02 19:46:02.387621 | orchestrator | changed: [testbed-node-1]
2025-06-02 19:46:02.388269 | orchestrator | changed: [testbed-node-2]
2025-06-02 19:46:02.388736 | orchestrator |
2025-06-02 19:46:02.389258 | orchestrator | PLAY [Set state bootstrap] *****************************************************
2025-06-02 19:46:02.390131 | orchestrator |
2025-06-02 19:46:02.391065 | orchestrator | TASK [Set osism.bootstrap.status fact] *****************************************
2025-06-02 19:46:02.391317 | orchestrator | Monday 02 June 2025 19:46:02 +0000 (0:00:01.205) 0:07:33.986 ***********
2025-06-02 19:46:03.341632 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-06-02 19:46:03.343459 | orchestrator |
2025-06-02 19:46:03.343511 | orchestrator | TASK [osism.commons.state : Create custom facts directory] *********************
2025-06-02 19:46:03.344818 | orchestrator | Monday 02 June 2025 19:46:03 +0000 (0:00:00.957) 0:07:34.944 ***********
2025-06-02 19:46:04.158803 | orchestrator | ok: [testbed-manager]
2025-06-02 19:46:04.158939 | orchestrator | ok: [testbed-node-3]
2025-06-02 19:46:04.160520 | orchestrator | ok: [testbed-node-4]
2025-06-02 19:46:04.161375 | orchestrator | ok: [testbed-node-5]
2025-06-02 19:46:04.162108 | orchestrator | ok: [testbed-node-0]
2025-06-02 19:46:04.162771 | orchestrator | ok: [testbed-node-1]
2025-06-02 19:46:04.166410 | orchestrator | ok: [testbed-node-2]
2025-06-02 19:46:04.166452 | orchestrator |
2025-06-02 19:46:04.168152 | orchestrator | TASK [osism.commons.state : Write state into file] *****************************
2025-06-02 19:46:04.168189 | orchestrator | Monday 02 June 2025 19:46:04 +0000 (0:00:00.810) 0:07:35.755 ***********
2025-06-02 19:46:05.369843 | orchestrator | changed: [testbed-manager]
2025-06-02 19:46:05.370595 | orchestrator | changed: [testbed-node-4]
2025-06-02 19:46:05.372287 | orchestrator | changed: [testbed-node-3]
2025-06-02 19:46:05.372781 | orchestrator | changed: [testbed-node-5]
2025-06-02 19:46:05.373706 | orchestrator | changed: [testbed-node-0]
2025-06-02 19:46:05.378290 | orchestrator | changed: [testbed-node-1]
2025-06-02 19:46:05.378337 | orchestrator | changed: [testbed-node-2]
2025-06-02 19:46:05.378349 | orchestrator |
2025-06-02 19:46:05.378362 | orchestrator | TASK [Set osism.bootstrap.timestamp fact] **************************************
2025-06-02 19:46:05.378374 | orchestrator | Monday 02 June 2025 19:46:05 +0000 (0:00:01.219) 0:07:36.974 ***********
2025-06-02 19:46:06.364449 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-06-02 19:46:06.366094 | orchestrator |
2025-06-02 19:46:06.366833 | orchestrator | TASK [osism.commons.state : Create custom facts directory] *********************
2025-06-02 19:46:06.369533 | orchestrator | Monday 02 June 2025 19:46:06 +0000 (0:00:00.992) 0:07:37.967 ***********
2025-06-02 19:46:06.778303 | orchestrator | ok: [testbed-manager]
2025-06-02 19:46:07.243934 | orchestrator | ok: [testbed-node-3]
2025-06-02 19:46:07.246194 | orchestrator | ok: [testbed-node-4]
2025-06-02 19:46:07.246304 | orchestrator | ok: [testbed-node-5]
2025-06-02 19:46:07.247218 | orchestrator | ok: [testbed-node-0]
2025-06-02 19:46:07.248880 | orchestrator | ok: [testbed-node-1]
2025-06-02 19:46:07.249903 | orchestrator | ok: [testbed-node-2]
2025-06-02 19:46:07.250874 | orchestrator |
2025-06-02 19:46:07.252535 | orchestrator | TASK [osism.commons.state : Write state into file] *****************************
2025-06-02 19:46:07.253044 | orchestrator | Monday 02 June 2025 19:46:07 +0000 (0:00:00.877) 0:07:38.845 ***********
2025-06-02 19:46:07.649636 | orchestrator | changed: [testbed-manager]
2025-06-02 19:46:08.306246 | orchestrator | changed: [testbed-node-3]
2025-06-02 19:46:08.306400 | orchestrator | changed: [testbed-node-4]
2025-06-02 19:46:08.306481 | orchestrator | changed: [testbed-node-5]
2025-06-02 19:46:08.308678 | orchestrator | changed: [testbed-node-0]
2025-06-02 19:46:08.309597 | orchestrator | changed: [testbed-node-1]
2025-06-02 19:46:08.310427 | orchestrator | changed: [testbed-node-2]
2025-06-02 19:46:08.311633 | orchestrator |
2025-06-02 19:46:08.312763 | orchestrator | PLAY RECAP *********************************************************************
2025-06-02 19:46:08.313257 | orchestrator | 2025-06-02 19:46:08 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-06-02 19:46:08.313283 | orchestrator | 2025-06-02 19:46:08 | INFO  | Please wait and do not abort execution.
2025-06-02 19:46:08.313619 | orchestrator | testbed-manager : ok=162  changed=38  unreachable=0 failed=0 skipped=41  rescued=0 ignored=0
2025-06-02 19:46:08.314905 | orchestrator | testbed-node-0 : ok=170  changed=66  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2025-06-02 19:46:08.314973 | orchestrator | testbed-node-1 : ok=170  changed=66  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2025-06-02 19:46:08.315257 | orchestrator | testbed-node-2 : ok=170  changed=66  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2025-06-02 19:46:08.315810 | orchestrator | testbed-node-3 : ok=169  changed=63  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0
2025-06-02 19:46:08.316409 | orchestrator | testbed-node-4 : ok=169  changed=63  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2025-06-02 19:46:08.317373 | orchestrator | testbed-node-5 : ok=169  changed=63  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2025-06-02 19:46:08.317395 | orchestrator |
2025-06-02 19:46:08.318125 | orchestrator |
2025-06-02 19:46:08.318649 | orchestrator | TASKS RECAP ********************************************************************
2025-06-02 19:46:08.319052 | orchestrator | Monday 02 June 2025 19:46:08 +0000 (0:00:01.062) 0:07:39.907 ***********
2025-06-02 19:46:08.319775 | orchestrator | ===============================================================================
2025-06-02 19:46:08.319995 | orchestrator | osism.commons.packages : Install required packages --------------------- 75.59s
2025-06-02 19:46:08.320694 | orchestrator | osism.commons.packages : Download required packages -------------------- 36.17s
2025-06-02 19:46:08.321061 | orchestrator | osism.commons.cleanup : Cleanup installed packages --------------------- 33.99s
2025-06-02 19:46:08.321568 | orchestrator | osism.commons.repository : Update package cache ------------------------ 13.55s
2025-06-02 19:46:08.322169 | orchestrator | osism.commons.packages : Remove dependencies that are no longer required -- 12.41s
2025-06-02 19:46:08.322812 | orchestrator | osism.commons.systohc : Install util-linux-extra package --------------- 11.24s
2025-06-02 19:46:08.323253 | orchestrator | osism.services.docker : Install docker package ------------------------- 10.29s
2025-06-02 19:46:08.323841 | orchestrator | osism.services.docker : Install containerd package ---------------------- 9.35s
2025-06-02 19:46:08.324226 | orchestrator | osism.services.docker : Install docker-cli package ---------------------- 8.65s
2025-06-02 19:46:08.324883 | orchestrator | osism.services.lldpd : Install lldpd package ---------------------------- 8.55s
2025-06-02 19:46:08.325381 | orchestrator | osism.services.smartd : Install smartmontools package ------------------- 8.40s
2025-06-02 19:46:08.325614 | orchestrator | osism.commons.cleanup : Remove cloudinit package ------------------------ 7.94s
2025-06-02 19:46:08.326006 | orchestrator | osism.services.rng : Install rng package -------------------------------- 7.79s
2025-06-02 19:46:08.326486 | orchestrator | osism.commons.docker_compose : Install docker-compose-plugin package ---- 7.36s
2025-06-02 19:46:08.326976 | orchestrator | osism.services.docker : Add repository ---------------------------------- 7.16s
2025-06-02 19:46:08.327365 | orchestrator | osism.commons.cleanup : Uninstall unattended-upgrades package ----------- 7.07s
2025-06-02 19:46:08.328055 | orchestrator | osism.commons.services : Populate service facts ------------------------- 5.77s
2025-06-02 19:46:08.328219 | orchestrator | osism.services.docker : Install apt-transport-https package ------------- 5.73s
2025-06-02 19:46:08.328477 | orchestrator | osism.commons.cleanup : Populate service facts -------------------------- 5.69s
2025-06-02 19:46:08.328849 | orchestrator | osism.services.chrony : Populate service facts -------------------------- 5.56s
2025-06-02 19:46:08.985734 | orchestrator | + [[ -e /etc/redhat-release ]]
2025-06-02 19:46:08.985842 | orchestrator | + osism apply network
2025-06-02 19:46:11.045481 | orchestrator | Registering Redlock._acquired_script
2025-06-02 19:46:11.045658 | orchestrator | Registering Redlock._extend_script
2025-06-02 19:46:11.045677 | orchestrator | Registering Redlock._release_script
2025-06-02 19:46:11.107310 | orchestrator | 2025-06-02 19:46:11 | INFO  | Task 486b9a62-2f89-4fe7-b550-bd7e68d35b6b (network) was prepared for execution.
2025-06-02 19:46:11.107430 | orchestrator | 2025-06-02 19:46:11 | INFO  | It takes a moment until task 486b9a62-2f89-4fe7-b550-bd7e68d35b6b (network) has been started and output is visible here.
2025-06-02 19:46:15.360187 | orchestrator |
2025-06-02 19:46:15.363108 | orchestrator | PLAY [Apply role network] ******************************************************
2025-06-02 19:46:15.363141 | orchestrator |
2025-06-02 19:46:15.363973 | orchestrator | TASK [osism.commons.network : Gather variables for each operating system] ******
2025-06-02 19:46:15.365637 | orchestrator | Monday 02 June 2025 19:46:15 +0000 (0:00:00.331) 0:00:00.331 ***********
2025-06-02 19:46:15.508115 | orchestrator | ok: [testbed-manager]
2025-06-02 19:46:15.585357 | orchestrator | ok: [testbed-node-0]
2025-06-02 19:46:15.661805 | orchestrator | ok: [testbed-node-1]
2025-06-02 19:46:15.741900 | orchestrator | ok: [testbed-node-2]
2025-06-02 19:46:15.920786 | orchestrator | ok: [testbed-node-3]
2025-06-02 19:46:16.057660 | orchestrator | ok: [testbed-node-4]
2025-06-02 19:46:16.057901 | orchestrator | ok: [testbed-node-5]
2025-06-02 19:46:16.058321 | orchestrator |
2025-06-02 19:46:16.059102 | orchestrator | TASK [osism.commons.network : Include type specific tasks] *********************
2025-06-02 19:46:16.059783 | orchestrator | Monday 02 June 2025 19:46:16 +0000 (0:00:00.697) 0:00:01.029 ***********
2025-06-02 19:46:17.242261 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/netplan-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-06-02 19:46:17.244346 | orchestrator |
2025-06-02 19:46:17.245385 | orchestrator | TASK [osism.commons.network : Install required packages] ***********************
2025-06-02 19:46:17.246409 | orchestrator | Monday 02 June 2025 19:46:17 +0000 (0:00:01.181) 0:00:02.210 ***********
2025-06-02 19:46:19.251531 | orchestrator | ok: [testbed-manager]
2025-06-02 19:46:19.255668 | orchestrator | ok: [testbed-node-1]
2025-06-02 19:46:19.259873 | orchestrator | ok: [testbed-node-2]
2025-06-02 19:46:19.262165 | orchestrator | ok: [testbed-node-0]
2025-06-02 19:46:19.263628 | orchestrator | ok: [testbed-node-3]
2025-06-02 19:46:19.263946 | orchestrator | ok: [testbed-node-4]
2025-06-02 19:46:19.265002 | orchestrator | ok: [testbed-node-5]
2025-06-02 19:46:19.265999 | orchestrator |
2025-06-02 19:46:19.266934 | orchestrator | TASK [osism.commons.network : Remove ifupdown package] *************************
2025-06-02 19:46:19.267895 | orchestrator | Monday 02 June 2025 19:46:19 +0000 (0:00:02.013) 0:00:04.224 ***********
2025-06-02 19:46:21.202323 | orchestrator | ok: [testbed-manager]
2025-06-02 19:46:21.202464 | orchestrator | ok: [testbed-node-0]
2025-06-02 19:46:21.203753 | orchestrator | ok: [testbed-node-1]
2025-06-02 19:46:21.204484 | orchestrator | ok: [testbed-node-2]
2025-06-02 19:46:21.205413 | orchestrator | ok: [testbed-node-3]
2025-06-02 19:46:21.209253 | orchestrator | ok: [testbed-node-5]
2025-06-02 19:46:21.210061 | orchestrator | ok: [testbed-node-4]
2025-06-02 19:46:21.210915 | orchestrator |
2025-06-02 19:46:21.211656 | orchestrator | TASK [osism.commons.network : Create required directories] *********************
2025-06-02 19:46:21.212534 | orchestrator | Monday 02 June 2025 19:46:21 +0000 (0:00:01.949) 0:00:06.174 ***********
2025-06-02 19:46:21.732157 | orchestrator | ok: [testbed-manager] => (item=/etc/netplan)
2025-06-02 19:46:21.732233 | orchestrator | ok: [testbed-node-1] => (item=/etc/netplan)
2025-06-02 19:46:21.732238 | orchestrator | ok: [testbed-node-0] => (item=/etc/netplan)
2025-06-02 19:46:22.172191 | orchestrator | ok: [testbed-node-2] => (item=/etc/netplan)
2025-06-02 19:46:22.172892 | orchestrator | ok: [testbed-node-4] => (item=/etc/netplan)
2025-06-02 19:46:22.173785 | orchestrator | ok: [testbed-node-3] => (item=/etc/netplan)
2025-06-02 19:46:22.174864 | orchestrator | ok: [testbed-node-5] => (item=/etc/netplan)
2025-06-02 19:46:22.175886 | orchestrator |
2025-06-02 19:46:22.176871 | orchestrator | TASK [osism.commons.network : Prepare netplan configuration template] **********
2025-06-02 19:46:22.178302 | orchestrator | Monday 02 June 2025 19:46:22 +0000 (0:00:00.972) 0:00:07.147 ***********
2025-06-02 19:46:25.402116 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-06-02 19:46:25.403484 | orchestrator | ok: [testbed-manager -> localhost]
2025-06-02 19:46:25.408438 | orchestrator | ok: [testbed-node-3 -> localhost]
2025-06-02 19:46:25.410837 | orchestrator | ok: [testbed-node-2 -> localhost]
2025-06-02 19:46:25.412400 | orchestrator | ok: [testbed-node-1 -> localhost]
2025-06-02 19:46:25.413674 | orchestrator | ok: [testbed-node-4 -> localhost]
2025-06-02 19:46:25.415055 | orchestrator | ok: [testbed-node-5 -> localhost]
2025-06-02 19:46:25.417600 | orchestrator |
2025-06-02 19:46:25.418726 | orchestrator | TASK [osism.commons.network : Copy netplan configuration] **********************
2025-06-02 19:46:25.419910 | orchestrator | Monday 02 June 2025 19:46:25 +0000 (0:00:03.224) 0:00:10.372 ***********
2025-06-02 19:46:26.839875 | orchestrator | changed: [testbed-manager]
2025-06-02 19:46:26.840045 | orchestrator | changed: [testbed-node-0]
2025-06-02 19:46:26.844710 | orchestrator | changed: [testbed-node-1]
2025-06-02 19:46:26.845383 | orchestrator | changed: [testbed-node-2]
2025-06-02 19:46:26.846276 | orchestrator | changed: [testbed-node-3]
2025-06-02 19:46:26.847304 | orchestrator | changed: [testbed-node-4]
2025-06-02 19:46:26.848021 | orchestrator | changed: [testbed-node-5]
2025-06-02 19:46:26.849155 | orchestrator |
2025-06-02 19:46:26.852842 | orchestrator | TASK [osism.commons.network : Remove netplan configuration template] ***********
2025-06-02 19:46:26.853530 | orchestrator | Monday 02 June 2025 19:46:26 +0000 (0:00:01.440) 0:00:11.812 ***********
2025-06-02 19:46:28.627127 | orchestrator | ok: [testbed-manager -> localhost]
2025-06-02 19:46:28.628157 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-06-02 19:46:28.629018 | orchestrator | ok: [testbed-node-3 -> localhost]
2025-06-02 19:46:28.630448 | orchestrator | ok: [testbed-node-1 -> localhost]
2025-06-02 19:46:28.632474 | orchestrator | ok: [testbed-node-2 -> localhost]
2025-06-02 19:46:28.633357 | orchestrator | ok: [testbed-node-4 -> localhost]
2025-06-02 19:46:28.634439 | orchestrator | ok: [testbed-node-5 -> localhost]
2025-06-02 19:46:28.634930 | orchestrator |
2025-06-02 19:46:28.635459 | orchestrator | TASK [osism.commons.network : Check if path for interface file exists] *********
2025-06-02 19:46:28.636256 | orchestrator | Monday 02 June 2025 19:46:28 +0000 (0:00:01.788) 0:00:13.601 ***********
2025-06-02 19:46:29.058760 | orchestrator | ok: [testbed-manager]
2025-06-02 19:46:29.359253 | orchestrator | ok: [testbed-node-0]
2025-06-02 19:46:29.813397 | orchestrator | ok: [testbed-node-1]
2025-06-02 19:46:29.813980 | orchestrator | ok: [testbed-node-2]
2025-06-02 19:46:29.816806 | orchestrator | ok: [testbed-node-3]
2025-06-02 19:46:29.817789 | orchestrator | ok: [testbed-node-4]
2025-06-02 19:46:29.818684 | orchestrator | ok: [testbed-node-5]
2025-06-02 19:46:29.819600 | orchestrator |
2025-06-02 19:46:29.820292 | orchestrator | TASK [osism.commons.network : Copy interfaces file] ****************************
2025-06-02 19:46:29.820781 | orchestrator | Monday 02 June 2025 19:46:29 +0000 (0:00:01.182) 0:00:14.783 ***********
2025-06-02 19:46:29.996050 | orchestrator | skipping: [testbed-manager]
2025-06-02 19:46:30.078880 | orchestrator | skipping: [testbed-node-0]
2025-06-02 19:46:30.162218 | orchestrator | skipping: [testbed-node-1]
2025-06-02 19:46:30.242432 | orchestrator | skipping: [testbed-node-2]
2025-06-02 19:46:30.324717 | orchestrator | skipping: [testbed-node-3]
2025-06-02 19:46:30.468934 | orchestrator | skipping: [testbed-node-4]
2025-06-02 19:46:30.469410 | orchestrator | skipping: [testbed-node-5]
2025-06-02 19:46:30.470274 | orchestrator |
2025-06-02 19:46:30.470978 | orchestrator | TASK [osism.commons.network : Install package networkd-dispatcher] *************
2025-06-02 19:46:30.474831 | orchestrator | Monday 02 June 2025 19:46:30 +0000 (0:00:00.660) 0:00:15.443 ***********
2025-06-02 19:46:32.681016 | orchestrator | ok: [testbed-manager]
2025-06-02 19:46:32.683924 | orchestrator | ok: [testbed-node-0]
2025-06-02 19:46:32.683982 | orchestrator | ok: [testbed-node-1]
2025-06-02 19:46:32.683994 | orchestrator | ok: [testbed-node-2]
2025-06-02 19:46:32.685429 | orchestrator | ok: [testbed-node-3]
2025-06-02 19:46:32.686658 | orchestrator | ok: [testbed-node-5]
2025-06-02 19:46:32.687643 | orchestrator | ok: [testbed-node-4]
2025-06-02 19:46:32.688430 | orchestrator |
2025-06-02 19:46:32.691924 | orchestrator | TASK [osism.commons.network : Copy dispatcher scripts] *************************
2025-06-02 19:46:32.692058 | orchestrator | Monday 02 June 2025 19:46:32 +0000 (0:00:02.206) 0:00:17.650 ***********
2025-06-02 19:46:32.936829 | orchestrator | skipping: [testbed-node-0]
2025-06-02 19:46:33.021240 | orchestrator | skipping: [testbed-node-1]
2025-06-02 19:46:33.105642 | orchestrator | skipping: [testbed-node-2]
2025-06-02 19:46:33.184527 | orchestrator | skipping: [testbed-node-3]
2025-06-02 19:46:33.537779 | orchestrator | skipping: [testbed-node-4]
2025-06-02 19:46:33.538871 | orchestrator | skipping: [testbed-node-5]
2025-06-02 19:46:33.539997 | orchestrator | changed: [testbed-manager] => (item={'dest': 'routable.d/iptables.sh', 'src': '/opt/configuration/network/iptables.sh'})
2025-06-02 19:46:33.541979 | orchestrator |
2025-06-02 19:46:33.543082 | orchestrator | TASK [osism.commons.network : Manage service networkd-dispatcher] **************
2025-06-02 19:46:33.544003 | orchestrator | Monday 02 June 2025 19:46:33 +0000 (0:00:00.860) 0:00:18.511 ***********
2025-06-02 19:46:35.311118 | orchestrator | ok: [testbed-manager]
2025-06-02 19:46:35.311299 | orchestrator | changed: [testbed-node-1]
2025-06-02 19:46:35.311995 | orchestrator | changed: [testbed-node-0]
2025-06-02 19:46:35.313331 | orchestrator | changed: [testbed-node-2]
2025-06-02 19:46:35.314964 | orchestrator | changed: [testbed-node-3]
2025-06-02 19:46:35.315999 | orchestrator | changed: [testbed-node-4]
2025-06-02 19:46:35.317679 | orchestrator | changed: [testbed-node-5]
2025-06-02 19:46:35.319395 | orchestrator |
2025-06-02 19:46:35.320267 | orchestrator | TASK [osism.commons.network : Include cleanup tasks] ***************************
2025-06-02 19:46:35.320851 | orchestrator | Monday 02 June 2025 19:46:35 +0000 (0:00:01.769) 0:00:20.281 ***********
2025-06-02 19:46:36.554284 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-netplan.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-06-02 19:46:36.557491 | orchestrator |
2025-06-02 19:46:36.558174 | orchestrator | TASK [osism.commons.network : List existing configuration files] ***************
2025-06-02 19:46:36.559373 | orchestrator | Monday 02 June 2025 19:46:36 +0000 (0:00:01.242) 0:00:21.524 ***********
2025-06-02 19:46:37.123457 | orchestrator | ok: [testbed-manager]
2025-06-02 19:46:37.712273 | orchestrator | ok: [testbed-node-0]
2025-06-02 19:46:37.714114 | orchestrator | ok: [testbed-node-1]
2025-06-02 19:46:37.717583 | orchestrator | ok: [testbed-node-2]
2025-06-02 19:46:37.717583 |
orchestrator | ok: [testbed-node-3] 2025-06-02 19:46:37.717671 | orchestrator | ok: [testbed-node-4] 2025-06-02 19:46:37.718117 | orchestrator | ok: [testbed-node-5] 2025-06-02 19:46:37.719136 | orchestrator | 2025-06-02 19:46:37.720016 | orchestrator | TASK [osism.commons.network : Set network_configured_files fact] *************** 2025-06-02 19:46:37.720380 | orchestrator | Monday 02 June 2025 19:46:37 +0000 (0:00:01.157) 0:00:22.682 *********** 2025-06-02 19:46:37.881119 | orchestrator | ok: [testbed-manager] 2025-06-02 19:46:37.968149 | orchestrator | ok: [testbed-node-0] 2025-06-02 19:46:38.054331 | orchestrator | ok: [testbed-node-1] 2025-06-02 19:46:38.138529 | orchestrator | ok: [testbed-node-2] 2025-06-02 19:46:38.230145 | orchestrator | ok: [testbed-node-3] 2025-06-02 19:46:38.372903 | orchestrator | ok: [testbed-node-4] 2025-06-02 19:46:38.372989 | orchestrator | ok: [testbed-node-5] 2025-06-02 19:46:38.374109 | orchestrator | 2025-06-02 19:46:38.375333 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] *************** 2025-06-02 19:46:38.376666 | orchestrator | Monday 02 June 2025 19:46:38 +0000 (0:00:00.660) 0:00:23.342 *********** 2025-06-02 19:46:38.791142 | orchestrator | changed: [testbed-manager] => (item=/etc/netplan/50-cloud-init.yaml) 2025-06-02 19:46:38.791280 | orchestrator | skipping: [testbed-manager] => (item=/etc/netplan/01-osism.yaml)  2025-06-02 19:46:39.115754 | orchestrator | changed: [testbed-node-0] => (item=/etc/netplan/50-cloud-init.yaml) 2025-06-02 19:46:39.120046 | orchestrator | skipping: [testbed-node-0] => (item=/etc/netplan/01-osism.yaml)  2025-06-02 19:46:39.120085 | orchestrator | changed: [testbed-node-1] => (item=/etc/netplan/50-cloud-init.yaml) 2025-06-02 19:46:39.120128 | orchestrator | skipping: [testbed-node-1] => (item=/etc/netplan/01-osism.yaml)  2025-06-02 19:46:39.120139 | orchestrator | changed: [testbed-node-2] => (item=/etc/netplan/50-cloud-init.yaml) 2025-06-02 19:46:39.120150 | 
orchestrator | skipping: [testbed-node-2] => (item=/etc/netplan/01-osism.yaml)  2025-06-02 19:46:39.120161 | orchestrator | changed: [testbed-node-3] => (item=/etc/netplan/50-cloud-init.yaml) 2025-06-02 19:46:39.120171 | orchestrator | skipping: [testbed-node-3] => (item=/etc/netplan/01-osism.yaml)  2025-06-02 19:46:39.586480 | orchestrator | changed: [testbed-node-4] => (item=/etc/netplan/50-cloud-init.yaml) 2025-06-02 19:46:39.587295 | orchestrator | skipping: [testbed-node-4] => (item=/etc/netplan/01-osism.yaml)  2025-06-02 19:46:39.588527 | orchestrator | changed: [testbed-node-5] => (item=/etc/netplan/50-cloud-init.yaml) 2025-06-02 19:46:39.589312 | orchestrator | skipping: [testbed-node-5] => (item=/etc/netplan/01-osism.yaml)  2025-06-02 19:46:39.589840 | orchestrator | 2025-06-02 19:46:39.590615 | orchestrator | TASK [osism.commons.network : Include dummy interfaces] ************************ 2025-06-02 19:46:39.591264 | orchestrator | Monday 02 June 2025 19:46:39 +0000 (0:00:01.214) 0:00:24.556 *********** 2025-06-02 19:46:39.750247 | orchestrator | skipping: [testbed-manager] 2025-06-02 19:46:39.835888 | orchestrator | skipping: [testbed-node-0] 2025-06-02 19:46:39.921194 | orchestrator | skipping: [testbed-node-1] 2025-06-02 19:46:39.999020 | orchestrator | skipping: [testbed-node-2] 2025-06-02 19:46:40.079141 | orchestrator | skipping: [testbed-node-3] 2025-06-02 19:46:40.200164 | orchestrator | skipping: [testbed-node-4] 2025-06-02 19:46:40.200321 | orchestrator | skipping: [testbed-node-5] 2025-06-02 19:46:40.200422 | orchestrator | 2025-06-02 19:46:40.200704 | orchestrator | TASK [osism.commons.network : Include vxlan interfaces] ************************ 2025-06-02 19:46:40.201325 | orchestrator | Monday 02 June 2025 19:46:40 +0000 (0:00:00.619) 0:00:25.176 *********** 2025-06-02 19:46:43.586011 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/vxlan-interfaces.yml for testbed-manager, 
testbed-node-0, testbed-node-1, testbed-node-4, testbed-node-2, testbed-node-5, testbed-node-3 2025-06-02 19:46:43.587057 | orchestrator | 2025-06-02 19:46:43.587655 | orchestrator | TASK [osism.commons.network : Create systemd networkd netdev files] ************ 2025-06-02 19:46:43.588519 | orchestrator | Monday 02 June 2025 19:46:43 +0000 (0:00:03.379) 0:00:28.555 *********** 2025-06-02 19:46:48.399282 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'addresses': ['192.168.112.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 42}}) 2025-06-02 19:46:48.403938 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 23}}) 2025-06-02 19:46:48.404627 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 42}}) 2025-06-02 19:46:48.405276 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 42}}) 2025-06-02 19:46:48.406139 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 42}}) 2025-06-02 19:46:48.406661 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'addresses': [], 
'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 42}}) 2025-06-02 19:46:48.407993 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 42}}) 2025-06-02 19:46:48.410619 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 42}}) 2025-06-02 19:46:48.411352 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.10/20'], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 23}}) 2025-06-02 19:46:48.411971 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.11/20'], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 23}}) 2025-06-02 19:46:48.412869 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.12/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 23}}) 2025-06-02 19:46:48.413381 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.13/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 23}}) 2025-06-02 
19:46:48.413864 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.15/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 23}}) 2025-06-02 19:46:48.414291 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.14/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 23}}) 2025-06-02 19:46:48.414963 | orchestrator | 2025-06-02 19:46:48.415552 | orchestrator | TASK [osism.commons.network : Create systemd networkd network files] *********** 2025-06-02 19:46:48.417584 | orchestrator | Monday 02 June 2025 19:46:48 +0000 (0:00:04.811) 0:00:33.367 *********** 2025-06-02 19:46:52.940886 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'addresses': ['192.168.112.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 42}}) 2025-06-02 19:46:52.941775 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 42}}) 2025-06-02 19:46:52.942746 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 42}}) 2025-06-02 19:46:52.943322 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', 
'192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 42}}) 2025-06-02 19:46:52.945482 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 42}}) 2025-06-02 19:46:52.946869 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 42}}) 2025-06-02 19:46:52.947715 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 23}}) 2025-06-02 19:46:52.949010 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 42}}) 2025-06-02 19:46:52.949303 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.10/20'], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 23}}) 2025-06-02 19:46:52.950652 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.11/20'], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 23}}) 2025-06-02 19:46:52.950795 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': 
{'addresses': ['192.168.128.15/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 23}}) 2025-06-02 19:46:52.951738 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.12/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 23}}) 2025-06-02 19:46:52.952424 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.13/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 23}}) 2025-06-02 19:46:52.953268 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.14/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 23}}) 2025-06-02 19:46:52.953909 | orchestrator | 2025-06-02 19:46:52.954660 | orchestrator | TASK [osism.commons.network : Include networkd cleanup tasks] ****************** 2025-06-02 19:46:52.955055 | orchestrator | Monday 02 June 2025 19:46:52 +0000 (0:00:04.546) 0:00:37.914 *********** 2025-06-02 19:46:54.195853 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-networkd.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-02 19:46:54.196032 | orchestrator | 2025-06-02 19:46:54.197149 | orchestrator | TASK [osism.commons.network : List existing configuration files] *************** 2025-06-02 19:46:54.197305 | orchestrator | Monday 02 June 2025 19:46:54 +0000 (0:00:01.252) 0:00:39.167 *********** 2025-06-02 
19:46:54.652023 | orchestrator | ok: [testbed-manager] 2025-06-02 19:46:54.923876 | orchestrator | ok: [testbed-node-0] 2025-06-02 19:46:55.358984 | orchestrator | ok: [testbed-node-1] 2025-06-02 19:46:55.360652 | orchestrator | ok: [testbed-node-2] 2025-06-02 19:46:55.361698 | orchestrator | ok: [testbed-node-3] 2025-06-02 19:46:55.362878 | orchestrator | ok: [testbed-node-4] 2025-06-02 19:46:55.363959 | orchestrator | ok: [testbed-node-5] 2025-06-02 19:46:55.365296 | orchestrator | 2025-06-02 19:46:55.366809 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] *************** 2025-06-02 19:46:55.367745 | orchestrator | Monday 02 June 2025 19:46:55 +0000 (0:00:01.165) 0:00:40.332 *********** 2025-06-02 19:46:55.446237 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.network)  2025-06-02 19:46:55.446865 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-06-02 19:46:55.448087 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.network)  2025-06-02 19:46:55.558861 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-06-02 19:46:55.558958 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.network)  2025-06-02 19:46:55.559695 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-06-02 19:46:55.561079 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.network)  2025-06-02 19:46:55.561504 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-06-02 19:46:55.644300 | orchestrator | skipping: [testbed-manager] 2025-06-02 19:46:55.644863 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.network)  2025-06-02 19:46:55.646386 | orchestrator | skipping: [testbed-node-1] => 
(item=/etc/systemd/network/30-vxlan0.netdev)  2025-06-02 19:46:55.646628 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.network)  2025-06-02 19:46:55.736988 | orchestrator | skipping: [testbed-node-0] 2025-06-02 19:46:55.737452 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-06-02 19:46:55.738835 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.network)  2025-06-02 19:46:55.739620 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-06-02 19:46:55.741685 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.network)  2025-06-02 19:46:55.741709 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-06-02 19:46:55.829079 | orchestrator | skipping: [testbed-node-1] 2025-06-02 19:46:55.829180 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.network)  2025-06-02 19:46:55.830011 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-06-02 19:46:55.830371 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.network)  2025-06-02 19:46:56.114207 | orchestrator | skipping: [testbed-node-2] 2025-06-02 19:46:56.114465 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-06-02 19:46:56.116226 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.network)  2025-06-02 19:46:56.119268 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-06-02 19:46:56.119655 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.network)  2025-06-02 19:46:56.120086 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-06-02 19:46:57.378197 | orchestrator | skipping: 
[testbed-node-3] 2025-06-02 19:46:57.379467 | orchestrator | skipping: [testbed-node-4] 2025-06-02 19:46:57.380965 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.network)  2025-06-02 19:46:57.382183 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-06-02 19:46:57.383722 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.network)  2025-06-02 19:46:57.384603 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-06-02 19:46:57.385639 | orchestrator | skipping: [testbed-node-5] 2025-06-02 19:46:57.386579 | orchestrator | 2025-06-02 19:46:57.387325 | orchestrator | RUNNING HANDLER [osism.commons.network : Reload systemd-networkd] ************** 2025-06-02 19:46:57.387778 | orchestrator | Monday 02 June 2025 19:46:57 +0000 (0:00:02.016) 0:00:42.349 *********** 2025-06-02 19:46:57.541923 | orchestrator | skipping: [testbed-manager] 2025-06-02 19:46:57.622259 | orchestrator | skipping: [testbed-node-0] 2025-06-02 19:46:57.701442 | orchestrator | skipping: [testbed-node-1] 2025-06-02 19:46:57.784775 | orchestrator | skipping: [testbed-node-2] 2025-06-02 19:46:57.868903 | orchestrator | skipping: [testbed-node-3] 2025-06-02 19:46:57.987726 | orchestrator | skipping: [testbed-node-4] 2025-06-02 19:46:57.988305 | orchestrator | skipping: [testbed-node-5] 2025-06-02 19:46:57.989790 | orchestrator | 2025-06-02 19:46:57.991195 | orchestrator | RUNNING HANDLER [osism.commons.network : Netplan configuration changed] ******** 2025-06-02 19:46:57.992427 | orchestrator | Monday 02 June 2025 19:46:57 +0000 (0:00:00.611) 0:00:42.961 *********** 2025-06-02 19:46:58.148187 | orchestrator | skipping: [testbed-manager] 2025-06-02 19:46:58.400923 | orchestrator | skipping: [testbed-node-0] 2025-06-02 19:46:58.479847 | orchestrator | skipping: [testbed-node-1] 2025-06-02 19:46:58.566263 | orchestrator | skipping: [testbed-node-2] 
2025-06-02 19:46:58.648094 | orchestrator | skipping: [testbed-node-3] 2025-06-02 19:46:58.689260 | orchestrator | skipping: [testbed-node-4] 2025-06-02 19:46:58.690604 | orchestrator | skipping: [testbed-node-5] 2025-06-02 19:46:58.692299 | orchestrator | 2025-06-02 19:46:58.693850 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-02 19:46:58.693912 | orchestrator | 2025-06-02 19:46:58 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-06-02 19:46:58.694007 | orchestrator | 2025-06-02 19:46:58 | INFO  | Please wait and do not abort execution. 2025-06-02 19:46:58.695368 | orchestrator | testbed-manager : ok=21  changed=5  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-06-02 19:46:58.696022 | orchestrator | testbed-node-0 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-06-02 19:46:58.696841 | orchestrator | testbed-node-1 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-06-02 19:46:58.697284 | orchestrator | testbed-node-2 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-06-02 19:46:58.698524 | orchestrator | testbed-node-3 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-06-02 19:46:58.699377 | orchestrator | testbed-node-4 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-06-02 19:46:58.700415 | orchestrator | testbed-node-5 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-06-02 19:46:58.700834 | orchestrator | 2025-06-02 19:46:58.701307 | orchestrator | 2025-06-02 19:46:58.701826 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-02 19:46:58.702671 | orchestrator | Monday 02 June 2025 19:46:58 +0000 (0:00:00.702) 0:00:43.664 *********** 2025-06-02 19:46:58.703704 | orchestrator | 
=============================================================================== 2025-06-02 19:46:58.704774 | orchestrator | osism.commons.network : Create systemd networkd netdev files ------------ 4.81s 2025-06-02 19:46:58.704893 | orchestrator | osism.commons.network : Create systemd networkd network files ----------- 4.55s 2025-06-02 19:46:58.705976 | orchestrator | osism.commons.network : Include vxlan interfaces ------------------------ 3.38s 2025-06-02 19:46:58.706583 | orchestrator | osism.commons.network : Prepare netplan configuration template ---------- 3.22s 2025-06-02 19:46:58.707006 | orchestrator | osism.commons.network : Install package networkd-dispatcher ------------- 2.21s 2025-06-02 19:46:58.707924 | orchestrator | osism.commons.network : Remove unused configuration files --------------- 2.02s 2025-06-02 19:46:58.708381 | orchestrator | osism.commons.network : Install required packages ----------------------- 2.01s 2025-06-02 19:46:58.709014 | orchestrator | osism.commons.network : Remove ifupdown package ------------------------- 1.95s 2025-06-02 19:46:58.709774 | orchestrator | osism.commons.network : Remove netplan configuration template ----------- 1.79s 2025-06-02 19:46:58.710232 | orchestrator | osism.commons.network : Manage service networkd-dispatcher -------------- 1.77s 2025-06-02 19:46:58.711062 | orchestrator | osism.commons.network : Copy netplan configuration ---------------------- 1.44s 2025-06-02 19:46:58.711740 | orchestrator | osism.commons.network : Include networkd cleanup tasks ------------------ 1.25s 2025-06-02 19:46:58.712627 | orchestrator | osism.commons.network : Include cleanup tasks --------------------------- 1.24s 2025-06-02 19:46:58.713222 | orchestrator | osism.commons.network : Remove unused configuration files --------------- 1.21s 2025-06-02 19:46:58.713909 | orchestrator | osism.commons.network : Check if path for interface file exists --------- 1.18s 2025-06-02 19:46:58.714847 | orchestrator | 
osism.commons.network : Include type specific tasks --------------------- 1.18s 2025-06-02 19:46:58.715581 | orchestrator | osism.commons.network : List existing configuration files --------------- 1.17s 2025-06-02 19:46:58.716305 | orchestrator | osism.commons.network : List existing configuration files --------------- 1.16s 2025-06-02 19:46:58.716666 | orchestrator | osism.commons.network : Create required directories --------------------- 0.97s 2025-06-02 19:46:58.717471 | orchestrator | osism.commons.network : Copy dispatcher scripts ------------------------- 0.86s 2025-06-02 19:46:59.293705 | orchestrator | + osism apply wireguard 2025-06-02 19:47:00.943275 | orchestrator | Registering Redlock._acquired_script 2025-06-02 19:47:00.943380 | orchestrator | Registering Redlock._extend_script 2025-06-02 19:47:00.943459 | orchestrator | Registering Redlock._release_script 2025-06-02 19:47:01.013680 | orchestrator | 2025-06-02 19:47:01 | INFO  | Task a37cfb8e-c16f-4a14-abb0-fc00e34d7f18 (wireguard) was prepared for execution. 2025-06-02 19:47:01.013765 | orchestrator | 2025-06-02 19:47:01 | INFO  | It takes a moment until task a37cfb8e-c16f-4a14-abb0-fc00e34d7f18 (wireguard) has been started and output is visible here. 
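The per-host item lists in the "Create systemd networkd netdev files" task above follow a full-mesh pattern: each host's `dests` is every VXLAN endpoint IP except its own `local_ip`, ordered as strings. A minimal sketch reconstructing those peer lists from the endpoints logged above (the `vxlan_dests` helper is illustrative, not part of the osism.commons.network role):

```python
# Endpoint IPs exactly as they appear in the task's loop items above.
ENDPOINTS = {
    "testbed-manager": "192.168.16.5",
    "testbed-node-0": "192.168.16.10",
    "testbed-node-1": "192.168.16.11",
    "testbed-node-2": "192.168.16.12",
    "testbed-node-3": "192.168.16.13",
    "testbed-node-4": "192.168.16.14",
    "testbed-node-5": "192.168.16.15",
}

def vxlan_dests(host: str) -> list[str]:
    """Full-mesh peer list: all endpoint IPs except the host's own local_ip.

    Illustrative helper only; the role builds these lists its own way.
    """
    local = ENDPOINTS[host]
    return sorted(ip for ip in ENDPOINTS.values() if ip != local)

# testbed-node-0 peers with every other endpoint, but never with itself.
print(vxlan_dests("testbed-node-0"))
```

Note that string sorting puts `192.168.16.5` last, which matches the `dests` ordering visible in the logged items.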
2025-06-02 19:47:05.010398 | orchestrator | 2025-06-02 19:47:05.013005 | orchestrator | PLAY [Apply role wireguard] **************************************************** 2025-06-02 19:47:05.014966 | orchestrator | 2025-06-02 19:47:05.015851 | orchestrator | TASK [osism.services.wireguard : Install iptables package] ********************* 2025-06-02 19:47:05.016649 | orchestrator | Monday 02 June 2025 19:47:04 +0000 (0:00:00.219) 0:00:00.219 *********** 2025-06-02 19:47:06.489216 | orchestrator | ok: [testbed-manager] 2025-06-02 19:47:06.490503 | orchestrator | 2025-06-02 19:47:06.491285 | orchestrator | TASK [osism.services.wireguard : Install wireguard package] ******************** 2025-06-02 19:47:06.491838 | orchestrator | Monday 02 June 2025 19:47:06 +0000 (0:00:01.480) 0:00:01.699 *********** 2025-06-02 19:47:12.854255 | orchestrator | changed: [testbed-manager] 2025-06-02 19:47:12.854827 | orchestrator | 2025-06-02 19:47:12.855508 | orchestrator | TASK [osism.services.wireguard : Create public and private key - server] ******* 2025-06-02 19:47:12.856285 | orchestrator | Monday 02 June 2025 19:47:12 +0000 (0:00:06.366) 0:00:08.066 *********** 2025-06-02 19:47:13.404123 | orchestrator | changed: [testbed-manager] 2025-06-02 19:47:13.404287 | orchestrator | 2025-06-02 19:47:13.405328 | orchestrator | TASK [osism.services.wireguard : Create preshared key] ************************* 2025-06-02 19:47:13.406270 | orchestrator | Monday 02 June 2025 19:47:13 +0000 (0:00:00.547) 0:00:08.614 *********** 2025-06-02 19:47:13.818180 | orchestrator | changed: [testbed-manager] 2025-06-02 19:47:13.818736 | orchestrator | 2025-06-02 19:47:13.819677 | orchestrator | TASK [osism.services.wireguard : Get preshared key] **************************** 2025-06-02 19:47:13.820782 | orchestrator | Monday 02 June 2025 19:47:13 +0000 (0:00:00.415) 0:00:09.029 *********** 2025-06-02 19:47:14.328026 | orchestrator | ok: [testbed-manager] 2025-06-02 19:47:14.329572 | orchestrator | 2025-06-02 
19:47:14.330510 | orchestrator | TASK [osism.services.wireguard : Get public key - server] **********************
2025-06-02 19:47:14.331732 | orchestrator | Monday 02 June 2025 19:47:14 +0000 (0:00:00.508) 0:00:09.538 ***********
2025-06-02 19:47:14.828770 | orchestrator | ok: [testbed-manager]
2025-06-02 19:47:14.829195 | orchestrator |
2025-06-02 19:47:14.829919 | orchestrator | TASK [osism.services.wireguard : Get private key - server] *********************
2025-06-02 19:47:14.830723 | orchestrator | Monday 02 June 2025 19:47:14 +0000 (0:00:00.502) 0:00:10.041 ***********
2025-06-02 19:47:15.238351 | orchestrator | ok: [testbed-manager]
2025-06-02 19:47:15.238886 | orchestrator |
2025-06-02 19:47:15.239457 | orchestrator | TASK [osism.services.wireguard : Copy wg0.conf configuration file] *************
2025-06-02 19:47:15.239891 | orchestrator | Monday 02 June 2025 19:47:15 +0000 (0:00:00.407) 0:00:10.449 ***********
2025-06-02 19:47:16.399104 | orchestrator | changed: [testbed-manager]
2025-06-02 19:47:16.399309 | orchestrator |
2025-06-02 19:47:16.400350 | orchestrator | TASK [osism.services.wireguard : Copy client configuration files] **************
2025-06-02 19:47:16.401304 | orchestrator | Monday 02 June 2025 19:47:16 +0000 (0:00:01.159) 0:00:11.608 ***********
2025-06-02 19:47:17.353310 | orchestrator | changed: [testbed-manager] => (item=None)
2025-06-02 19:47:17.353814 | orchestrator | changed: [testbed-manager]
2025-06-02 19:47:17.354767 | orchestrator |
2025-06-02 19:47:17.356074 | orchestrator | TASK [osism.services.wireguard : Manage wg-quick@wg0.service service] **********
2025-06-02 19:47:17.356098 | orchestrator | Monday 02 June 2025 19:47:17 +0000 (0:00:00.955) 0:00:12.564 ***********
2025-06-02 19:47:18.999856 | orchestrator | changed: [testbed-manager]
2025-06-02 19:47:18.999974 | orchestrator |
2025-06-02 19:47:18.999991 | orchestrator | RUNNING HANDLER [osism.services.wireguard : Restart wg0 service] ***************
2025-06-02 19:47:19.000065 | orchestrator | Monday 02 June 2025 19:47:18 +0000 (0:00:01.643) 0:00:14.208 ***********
2025-06-02 19:47:19.939233 | orchestrator | changed: [testbed-manager]
2025-06-02 19:47:19.939929 | orchestrator |
2025-06-02 19:47:19.941156 | orchestrator | PLAY RECAP *********************************************************************
2025-06-02 19:47:19.941466 | orchestrator | 2025-06-02 19:47:19 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-06-02 19:47:19.941747 | orchestrator | 2025-06-02 19:47:19 | INFO  | Please wait and do not abort execution.
2025-06-02 19:47:19.943055 | orchestrator | testbed-manager : ok=11  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-02 19:47:19.943642 | orchestrator |
2025-06-02 19:47:19.944142 | orchestrator |
2025-06-02 19:47:19.944703 | orchestrator | TASKS RECAP ********************************************************************
2025-06-02 19:47:19.945565 | orchestrator | Monday 02 June 2025 19:47:19 +0000 (0:00:00.941) 0:00:15.149 ***********
2025-06-02 19:47:19.946161 | orchestrator | ===============================================================================
2025-06-02 19:47:19.946874 | orchestrator | osism.services.wireguard : Install wireguard package -------------------- 6.37s
2025-06-02 19:47:19.947575 | orchestrator | osism.services.wireguard : Manage wg-quick@wg0.service service ---------- 1.64s
2025-06-02 19:47:19.948209 | orchestrator | osism.services.wireguard : Install iptables package --------------------- 1.48s
2025-06-02 19:47:19.948698 | orchestrator | osism.services.wireguard : Copy wg0.conf configuration file ------------- 1.16s
2025-06-02 19:47:19.949119 | orchestrator | osism.services.wireguard : Copy client configuration files -------------- 0.96s
2025-06-02 19:47:19.949911 | orchestrator | osism.services.wireguard : Restart wg0 service -------------------------- 0.94s
2025-06-02 19:47:19.950297 | orchestrator | osism.services.wireguard : Create public and private key - server ------- 0.55s
2025-06-02 19:47:19.951367 | orchestrator | osism.services.wireguard : Get preshared key ---------------------------- 0.51s
2025-06-02 19:47:19.952175 | orchestrator | osism.services.wireguard : Get public key - server ---------------------- 0.50s
2025-06-02 19:47:19.952722 | orchestrator | osism.services.wireguard : Create preshared key ------------------------- 0.42s
2025-06-02 19:47:19.953405 | orchestrator | osism.services.wireguard : Get private key - server --------------------- 0.41s
2025-06-02 19:47:20.543390 | orchestrator | + sh -c /opt/configuration/scripts/prepare-wireguard-configuration.sh
2025-06-02 19:47:20.577725 | orchestrator | % Total % Received % Xferd Average Speed Time Time Time Current
2025-06-02 19:47:20.577870 | orchestrator | Dload Upload Total Spent Left Speed
2025-06-02 19:47:20.663312 | orchestrator | 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 15 100 15 0 0 174 0 --:--:-- --:--:-- --:--:-- 176
2025-06-02 19:47:20.680200 | orchestrator | + osism apply --environment custom workarounds
2025-06-02 19:47:22.443802 | orchestrator | 2025-06-02 19:47:22 | INFO  | Trying to run play workarounds in environment custom
2025-06-02 19:47:22.448080 | orchestrator | Registering Redlock._acquired_script
2025-06-02 19:47:22.448129 | orchestrator | Registering Redlock._extend_script
2025-06-02 19:47:22.448142 | orchestrator | Registering Redlock._release_script
2025-06-02 19:47:22.508967 | orchestrator | 2025-06-02 19:47:22 | INFO  | Task 8b4545bf-eb44-4317-981e-3d60fb8de3f1 (workarounds) was prepared for execution.
2025-06-02 19:47:22.509063 | orchestrator | 2025-06-02 19:47:22 | INFO  | It takes a moment until task 8b4545bf-eb44-4317-981e-3d60fb8de3f1 (workarounds) has been started and output is visible here.
2025-06-02 19:47:26.382150 | orchestrator |
2025-06-02 19:47:26.384770 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-06-02 19:47:26.386479 | orchestrator |
2025-06-02 19:47:26.387866 | orchestrator | TASK [Group hosts based on virtualization_role] ********************************
2025-06-02 19:47:26.388919 | orchestrator | Monday 02 June 2025 19:47:26 +0000 (0:00:00.144) 0:00:00.144 ***********
2025-06-02 19:47:26.559796 | orchestrator | changed: [testbed-manager] => (item=virtualization_role_guest)
2025-06-02 19:47:26.675494 | orchestrator | changed: [testbed-node-3] => (item=virtualization_role_guest)
2025-06-02 19:47:26.757962 | orchestrator | changed: [testbed-node-4] => (item=virtualization_role_guest)
2025-06-02 19:47:26.840008 | orchestrator | changed: [testbed-node-5] => (item=virtualization_role_guest)
2025-06-02 19:47:27.015871 | orchestrator | changed: [testbed-node-0] => (item=virtualization_role_guest)
2025-06-02 19:47:27.176751 | orchestrator | changed: [testbed-node-1] => (item=virtualization_role_guest)
2025-06-02 19:47:27.176849 | orchestrator | changed: [testbed-node-2] => (item=virtualization_role_guest)
2025-06-02 19:47:27.176920 | orchestrator |
2025-06-02 19:47:27.177468 | orchestrator | PLAY [Apply netplan configuration on the manager node] *************************
2025-06-02 19:47:27.178492 | orchestrator |
2025-06-02 19:47:27.179877 | orchestrator | TASK [Apply netplan configuration] *********************************************
2025-06-02 19:47:27.179939 | orchestrator | Monday 02 June 2025 19:47:27 +0000 (0:00:00.798) 0:00:00.942 ***********
2025-06-02 19:47:29.555784 | orchestrator | ok: [testbed-manager]
2025-06-02 19:47:29.557311 | orchestrator |
2025-06-02 19:47:29.560421 | orchestrator | PLAY [Apply netplan configuration on all other nodes] **************************
2025-06-02 19:47:29.564007 | orchestrator |
2025-06-02 19:47:29.564359 | orchestrator | TASK [Apply netplan configuration] *********************************************
2025-06-02 19:47:29.564873 | orchestrator | Monday 02 June 2025 19:47:29 +0000 (0:00:02.375) 0:00:03.318 ***********
2025-06-02 19:47:31.385758 | orchestrator | ok: [testbed-node-3]
2025-06-02 19:47:31.386211 | orchestrator | ok: [testbed-node-4]
2025-06-02 19:47:31.387489 | orchestrator | ok: [testbed-node-5]
2025-06-02 19:47:31.387538 | orchestrator | ok: [testbed-node-0]
2025-06-02 19:47:31.388280 | orchestrator | ok: [testbed-node-1]
2025-06-02 19:47:31.389023 | orchestrator | ok: [testbed-node-2]
2025-06-02 19:47:31.389849 | orchestrator |
2025-06-02 19:47:31.390561 | orchestrator | PLAY [Add custom CA certificates to non-manager nodes] *************************
2025-06-02 19:47:31.391281 | orchestrator |
2025-06-02 19:47:31.392079 | orchestrator | TASK [Copy custom CA certificates] *********************************************
2025-06-02 19:47:31.392607 | orchestrator | Monday 02 June 2025 19:47:31 +0000 (0:00:01.829) 0:00:05.147 ***********
2025-06-02 19:47:32.856889 | orchestrator | changed: [testbed-node-4] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2025-06-02 19:47:32.857952 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2025-06-02 19:47:32.859548 | orchestrator | changed: [testbed-node-3] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2025-06-02 19:47:32.860255 | orchestrator | changed: [testbed-node-5] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2025-06-02 19:47:32.861256 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2025-06-02 19:47:32.861739 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2025-06-02 19:47:32.862505 | orchestrator |
2025-06-02 19:47:32.863042 | orchestrator | TASK [Run update-ca-certificates] **********************************************
2025-06-02 19:47:32.863871 | orchestrator | Monday 02 June 2025 19:47:32 +0000 (0:00:01.471) 0:00:06.618 ***********
2025-06-02 19:47:36.682277 | orchestrator | changed: [testbed-node-4]
2025-06-02 19:47:36.685723 | orchestrator | changed: [testbed-node-3]
2025-06-02 19:47:36.686565 | orchestrator | changed: [testbed-node-5]
2025-06-02 19:47:36.686694 | orchestrator | changed: [testbed-node-0]
2025-06-02 19:47:36.686778 | orchestrator | changed: [testbed-node-2]
2025-06-02 19:47:36.687179 | orchestrator | changed: [testbed-node-1]
2025-06-02 19:47:36.687643 | orchestrator |
2025-06-02 19:47:36.688194 | orchestrator | TASK [Run update-ca-trust] *****************************************************
2025-06-02 19:47:36.688990 | orchestrator | Monday 02 June 2025 19:47:36 +0000 (0:00:03.826) 0:00:10.445 ***********
2025-06-02 19:47:36.845906 | orchestrator | skipping: [testbed-node-3]
2025-06-02 19:47:36.926067 | orchestrator | skipping: [testbed-node-4]
2025-06-02 19:47:37.005621 | orchestrator | skipping: [testbed-node-5]
2025-06-02 19:47:37.083954 | orchestrator | skipping: [testbed-node-0]
2025-06-02 19:47:37.399000 | orchestrator | skipping: [testbed-node-1]
2025-06-02 19:47:37.399935 | orchestrator | skipping: [testbed-node-2]
2025-06-02 19:47:37.401166 | orchestrator |
2025-06-02 19:47:37.403277 | orchestrator | PLAY [Add a workaround service] ************************************************
2025-06-02 19:47:37.403613 | orchestrator |
2025-06-02 19:47:37.404428 | orchestrator | TASK [Copy workarounds.sh scripts] *********************************************
2025-06-02 19:47:37.405410 | orchestrator | Monday 02 June 2025 19:47:37 +0000 (0:00:00.717) 0:00:11.162 ***********
2025-06-02 19:47:39.001057 | orchestrator | changed: [testbed-manager]
2025-06-02 19:47:39.004747 | orchestrator | changed: [testbed-node-4]
2025-06-02 19:47:39.004806 | orchestrator | changed: [testbed-node-3]
2025-06-02 19:47:39.007375 | orchestrator | changed: [testbed-node-5]
2025-06-02 19:47:39.008908 | orchestrator | changed: [testbed-node-0]
2025-06-02 19:47:39.009308 | orchestrator | changed: [testbed-node-1]
2025-06-02 19:47:39.010720 | orchestrator | changed: [testbed-node-2]
2025-06-02 19:47:39.011549 | orchestrator |
2025-06-02 19:47:39.012903 | orchestrator | TASK [Copy workarounds systemd unit file] **************************************
2025-06-02 19:47:39.013681 | orchestrator | Monday 02 June 2025 19:47:38 +0000 (0:00:01.601) 0:00:12.764 ***********
2025-06-02 19:47:40.623683 | orchestrator | changed: [testbed-manager]
2025-06-02 19:47:40.624953 | orchestrator | changed: [testbed-node-3]
2025-06-02 19:47:40.626484 | orchestrator | changed: [testbed-node-4]
2025-06-02 19:47:40.626616 | orchestrator | changed: [testbed-node-5]
2025-06-02 19:47:40.627167 | orchestrator | changed: [testbed-node-0]
2025-06-02 19:47:40.627868 | orchestrator | changed: [testbed-node-1]
2025-06-02 19:47:40.628826 | orchestrator | changed: [testbed-node-2]
2025-06-02 19:47:40.629585 | orchestrator |
2025-06-02 19:47:40.630458 | orchestrator | TASK [Reload systemd daemon] ***************************************************
2025-06-02 19:47:40.631337 | orchestrator | Monday 02 June 2025 19:47:40 +0000 (0:00:01.619) 0:00:14.384 ***********
2025-06-02 19:47:42.099488 | orchestrator | ok: [testbed-manager]
2025-06-02 19:47:42.100040 | orchestrator | ok: [testbed-node-5]
2025-06-02 19:47:42.103996 | orchestrator | ok: [testbed-node-4]
2025-06-02 19:47:42.104047 | orchestrator | ok: [testbed-node-3]
2025-06-02 19:47:42.104066 | orchestrator | ok: [testbed-node-0]
2025-06-02 19:47:42.104081 | orchestrator | ok: [testbed-node-1]
2025-06-02 19:47:42.104163 | orchestrator | ok: [testbed-node-2]
2025-06-02 19:47:42.104464 | orchestrator |
2025-06-02 19:47:42.104906 | orchestrator | TASK [Enable workarounds.service (Debian)] *************************************
2025-06-02 19:47:42.105315 | orchestrator | Monday 02 June 2025 19:47:42 +0000 (0:00:01.479) 0:00:15.863 ***********
2025-06-02 19:47:43.835693 | orchestrator | changed: [testbed-manager]
2025-06-02 19:47:43.838754 | orchestrator | changed: [testbed-node-3]
2025-06-02 19:47:43.838805 | orchestrator | changed: [testbed-node-4]
2025-06-02 19:47:43.838818 | orchestrator | changed: [testbed-node-5]
2025-06-02 19:47:43.839334 | orchestrator | changed: [testbed-node-0]
2025-06-02 19:47:43.839596 | orchestrator | changed: [testbed-node-1]
2025-06-02 19:47:43.840198 | orchestrator | changed: [testbed-node-2]
2025-06-02 19:47:43.840886 | orchestrator |
2025-06-02 19:47:43.841828 | orchestrator | TASK [Enable and start workarounds.service (RedHat)] ***************************
2025-06-02 19:47:43.842472 | orchestrator | Monday 02 June 2025 19:47:43 +0000 (0:00:01.732) 0:00:17.596 ***********
2025-06-02 19:47:43.997834 | orchestrator | skipping: [testbed-manager]
2025-06-02 19:47:44.082837 | orchestrator | skipping: [testbed-node-3]
2025-06-02 19:47:44.166634 | orchestrator | skipping: [testbed-node-4]
2025-06-02 19:47:44.242692 | orchestrator | skipping: [testbed-node-5]
2025-06-02 19:47:44.320919 | orchestrator | skipping: [testbed-node-0]
2025-06-02 19:47:44.448197 | orchestrator | skipping: [testbed-node-1]
2025-06-02 19:47:44.448800 | orchestrator | skipping: [testbed-node-2]
2025-06-02 19:47:44.449476 | orchestrator |
2025-06-02 19:47:44.452506 | orchestrator | PLAY [On Ubuntu 24.04 install python3-docker from Debian Sid] ******************
2025-06-02 19:47:44.452571 | orchestrator |
2025-06-02 19:47:44.452591 | orchestrator | TASK [Install python3-docker] **************************************************
2025-06-02 19:47:44.452603 | orchestrator | Monday 02 June 2025 19:47:44 +0000 (0:00:00.615) 0:00:18.212 ***********
2025-06-02 19:47:47.067262 | orchestrator | ok: [testbed-manager]
2025-06-02 19:47:47.067432 | orchestrator | ok: [testbed-node-3]
2025-06-02 19:47:47.068365 | orchestrator | ok: [testbed-node-4]
2025-06-02 19:47:47.069696 | orchestrator | ok: [testbed-node-5]
2025-06-02 19:47:47.071338 | orchestrator | ok: [testbed-node-0]
2025-06-02 19:47:47.072116 | orchestrator | ok: [testbed-node-1]
2025-06-02 19:47:47.072701 | orchestrator | ok: [testbed-node-2]
2025-06-02 19:47:47.073555 | orchestrator |
2025-06-02 19:47:47.075406 | orchestrator | PLAY RECAP *********************************************************************
2025-06-02 19:47:47.075573 | orchestrator | 2025-06-02 19:47:47 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-06-02 19:47:47.075596 | orchestrator | 2025-06-02 19:47:47 | INFO  | Please wait and do not abort execution.
2025-06-02 19:47:47.076228 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-06-02 19:47:47.076903 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-06-02 19:47:47.078192 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-06-02 19:47:47.078570 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-06-02 19:47:47.078969 | orchestrator | testbed-node-3 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-06-02 19:47:47.079855 | orchestrator | testbed-node-4 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-06-02 19:47:47.080630 | orchestrator | testbed-node-5 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-06-02 19:47:47.081061 | orchestrator |
2025-06-02 19:47:47.081715 | orchestrator |
2025-06-02 19:47:47.082429 | orchestrator | TASKS RECAP ********************************************************************
2025-06-02 19:47:47.083405 | orchestrator | Monday 02 June 2025 19:47:47 +0000 (0:00:02.617) 0:00:20.829 ***********
2025-06-02 19:47:47.083912 | orchestrator | ===============================================================================
2025-06-02 19:47:47.084668 | orchestrator | Run update-ca-certificates ---------------------------------------------- 3.83s
2025-06-02 19:47:47.085097 | orchestrator | Install python3-docker -------------------------------------------------- 2.62s
2025-06-02 19:47:47.085912 | orchestrator | Apply netplan configuration --------------------------------------------- 2.38s
2025-06-02 19:47:47.086224 | orchestrator | Apply netplan configuration --------------------------------------------- 1.83s
2025-06-02 19:47:47.086723 | orchestrator | Enable workarounds.service (Debian) ------------------------------------- 1.73s
2025-06-02 19:47:47.087272 | orchestrator | Copy workarounds systemd unit file -------------------------------------- 1.62s
2025-06-02 19:47:47.087739 | orchestrator | Copy workarounds.sh scripts --------------------------------------------- 1.60s
2025-06-02 19:47:47.088369 | orchestrator | Reload systemd daemon --------------------------------------------------- 1.48s
2025-06-02 19:47:47.088717 | orchestrator | Copy custom CA certificates --------------------------------------------- 1.47s
2025-06-02 19:47:47.089081 | orchestrator | Group hosts based on virtualization_role -------------------------------- 0.80s
2025-06-02 19:47:47.089619 | orchestrator | Run update-ca-trust ----------------------------------------------------- 0.72s
2025-06-02 19:47:47.089993 | orchestrator | Enable and start workarounds.service (RedHat) --------------------------- 0.62s
2025-06-02 19:47:47.707341 | orchestrator | + osism apply reboot -l testbed-nodes -e ireallymeanit=yes
2025-06-02 19:47:49.357919 | orchestrator | Registering Redlock._acquired_script
2025-06-02 19:47:49.358072 | orchestrator | Registering Redlock._extend_script
2025-06-02 19:47:49.358090 | orchestrator | Registering Redlock._release_script
2025-06-02 19:47:49.416752 | orchestrator | 2025-06-02 19:47:49 | INFO  | Task d2aa809f-f81e-4534-ac9d-7a26943d5062 (reboot) was prepared for execution.
2025-06-02 19:47:49.416843 | orchestrator | 2025-06-02 19:47:49 | INFO  | It takes a moment until task d2aa809f-f81e-4534-ac9d-7a26943d5062 (reboot) has been started and output is visible here.
2025-06-02 19:47:53.389857 | orchestrator |
2025-06-02 19:47:53.392695 | orchestrator | PLAY [Reboot systems] **********************************************************
2025-06-02 19:47:53.393732 | orchestrator |
2025-06-02 19:47:53.394965 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2025-06-02 19:47:53.395795 | orchestrator | Monday 02 June 2025 19:47:53 +0000 (0:00:00.216) 0:00:00.216 ***********
2025-06-02 19:47:53.495154 | orchestrator | skipping: [testbed-node-0]
2025-06-02 19:47:53.495890 | orchestrator |
2025-06-02 19:47:53.495982 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2025-06-02 19:47:53.496542 | orchestrator | Monday 02 June 2025 19:47:53 +0000 (0:00:00.106) 0:00:00.322 ***********
2025-06-02 19:47:54.460670 | orchestrator | changed: [testbed-node-0]
2025-06-02 19:47:54.460764 | orchestrator |
2025-06-02 19:47:54.461623 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2025-06-02 19:47:54.462014 | orchestrator | Monday 02 June 2025 19:47:54 +0000 (0:00:00.966) 0:00:01.289 ***********
2025-06-02 19:47:54.579773 | orchestrator | skipping: [testbed-node-0]
2025-06-02 19:47:54.580136 | orchestrator |
2025-06-02 19:47:54.580944 | orchestrator | PLAY [Reboot systems] **********************************************************
2025-06-02 19:47:54.582767 | orchestrator |
2025-06-02 19:47:54.585176 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2025-06-02 19:47:54.585920 | orchestrator | Monday 02 June 2025 19:47:54 +0000 (0:00:00.119) 0:00:01.408 ***********
2025-06-02 19:47:54.680013 | orchestrator | skipping: [testbed-node-1]
2025-06-02 19:47:54.681332 | orchestrator |
2025-06-02 19:47:54.682194 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2025-06-02 19:47:54.683280 | orchestrator | Monday 02 June 2025 19:47:54 +0000 (0:00:00.101) 0:00:01.510 ***********
2025-06-02 19:47:55.333232 | orchestrator | changed: [testbed-node-1]
2025-06-02 19:47:55.333727 | orchestrator |
2025-06-02 19:47:55.334415 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2025-06-02 19:47:55.335279 | orchestrator | Monday 02 June 2025 19:47:55 +0000 (0:00:00.653) 0:00:02.163 ***********
2025-06-02 19:47:55.443999 | orchestrator | skipping: [testbed-node-1]
2025-06-02 19:47:55.444676 | orchestrator |
2025-06-02 19:47:55.445156 | orchestrator | PLAY [Reboot systems] **********************************************************
2025-06-02 19:47:55.446132 | orchestrator |
2025-06-02 19:47:55.446625 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2025-06-02 19:47:55.447248 | orchestrator | Monday 02 June 2025 19:47:55 +0000 (0:00:00.107) 0:00:02.271 ***********
2025-06-02 19:47:55.635470 | orchestrator | skipping: [testbed-node-2]
2025-06-02 19:47:55.635857 | orchestrator |
2025-06-02 19:47:55.637276 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2025-06-02 19:47:55.638213 | orchestrator | Monday 02 June 2025 19:47:55 +0000 (0:00:00.193) 0:00:02.464 ***********
2025-06-02 19:47:56.327650 | orchestrator | changed: [testbed-node-2]
2025-06-02 19:47:56.328553 | orchestrator |
2025-06-02 19:47:56.330599 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2025-06-02 19:47:56.331258 | orchestrator | Monday 02 June 2025 19:47:56 +0000 (0:00:00.693) 0:00:03.157 ***********
2025-06-02 19:47:56.441365 | orchestrator | skipping: [testbed-node-2]
2025-06-02 19:47:56.442135 | orchestrator |
2025-06-02 19:47:56.443585 | orchestrator | PLAY [Reboot systems] **********************************************************
2025-06-02 19:47:56.445163 | orchestrator |
2025-06-02 19:47:56.448183 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2025-06-02 19:47:56.448689 | orchestrator | Monday 02 June 2025 19:47:56 +0000 (0:00:00.112) 0:00:03.270 ***********
2025-06-02 19:47:56.528068 | orchestrator | skipping: [testbed-node-3]
2025-06-02 19:47:56.528614 | orchestrator |
2025-06-02 19:47:56.530652 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2025-06-02 19:47:56.531315 | orchestrator | Monday 02 June 2025 19:47:56 +0000 (0:00:00.086) 0:00:03.357 ***********
2025-06-02 19:47:57.176112 | orchestrator | changed: [testbed-node-3]
2025-06-02 19:47:57.176740 | orchestrator |
2025-06-02 19:47:57.177436 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2025-06-02 19:47:57.178609 | orchestrator | Monday 02 June 2025 19:47:57 +0000 (0:00:00.649) 0:00:04.006 ***********
2025-06-02 19:47:57.293696 | orchestrator | skipping: [testbed-node-3]
2025-06-02 19:47:57.293977 | orchestrator |
2025-06-02 19:47:57.296399 | orchestrator | PLAY [Reboot systems] **********************************************************
2025-06-02 19:47:57.297244 | orchestrator |
2025-06-02 19:47:57.297852 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2025-06-02 19:47:57.299066 | orchestrator | Monday 02 June 2025 19:47:57 +0000 (0:00:00.114) 0:00:04.121 ***********
2025-06-02 19:47:57.403663 | orchestrator | skipping: [testbed-node-4]
2025-06-02 19:47:57.404639 | orchestrator |
2025-06-02 19:47:57.404810 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2025-06-02 19:47:57.405777 | orchestrator | Monday 02 June 2025 19:47:57 +0000 (0:00:00.112) 0:00:04.233 ***********
2025-06-02 19:47:58.068397 | orchestrator | changed: [testbed-node-4]
2025-06-02 19:47:58.068655 | orchestrator |
2025-06-02 19:47:58.069702 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2025-06-02 19:47:58.070937 | orchestrator | Monday 02 June 2025 19:47:58 +0000 (0:00:00.663) 0:00:04.897 ***********
2025-06-02 19:47:58.173233 | orchestrator | skipping: [testbed-node-4]
2025-06-02 19:47:58.173485 | orchestrator |
2025-06-02 19:47:58.174731 | orchestrator | PLAY [Reboot systems] **********************************************************
2025-06-02 19:47:58.176695 | orchestrator |
2025-06-02 19:47:58.176722 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2025-06-02 19:47:58.177311 | orchestrator | Monday 02 June 2025 19:47:58 +0000 (0:00:00.104) 0:00:05.001 ***********
2025-06-02 19:47:58.267500 | orchestrator | skipping: [testbed-node-5]
2025-06-02 19:47:58.268195 | orchestrator |
2025-06-02 19:47:58.269017 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2025-06-02 19:47:58.271006 | orchestrator | Monday 02 June 2025 19:47:58 +0000 (0:00:00.096) 0:00:05.097 ***********
2025-06-02 19:47:58.945229 | orchestrator | changed: [testbed-node-5]
2025-06-02 19:47:58.945582 | orchestrator |
2025-06-02 19:47:58.947293 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2025-06-02 19:47:58.948325 | orchestrator | Monday 02 June 2025 19:47:58 +0000 (0:00:00.676) 0:00:05.774 ***********
2025-06-02 19:47:58.983105 | orchestrator | skipping: [testbed-node-5]
2025-06-02 19:47:58.983319 | orchestrator |
2025-06-02 19:47:58.984268 | orchestrator | PLAY RECAP *********************************************************************
2025-06-02 19:47:58.985384 | orchestrator | 2025-06-02 19:47:58 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-06-02 19:47:58.985413 | orchestrator | 2025-06-02 19:47:58 | INFO  | Please wait and do not abort execution.
2025-06-02 19:47:58.986165 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-06-02 19:47:58.987235 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-06-02 19:47:58.988057 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-06-02 19:47:58.988665 | orchestrator | testbed-node-3 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-06-02 19:47:58.989464 | orchestrator | testbed-node-4 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-06-02 19:47:58.990591 | orchestrator | testbed-node-5 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-06-02 19:47:58.991427 | orchestrator |
2025-06-02 19:47:58.992314 | orchestrator |
2025-06-02 19:47:58.993188 | orchestrator | TASKS RECAP ********************************************************************
2025-06-02 19:47:58.993877 | orchestrator | Monday 02 June 2025 19:47:58 +0000 (0:00:00.039) 0:00:05.813 ***********
2025-06-02 19:47:58.994587 | orchestrator | ===============================================================================
2025-06-02 19:47:58.996657 | orchestrator | Reboot system - do not wait for the reboot to complete ------------------ 4.30s
2025-06-02 19:47:58.997578 | orchestrator | Exit playbook, if user did not mean to reboot systems ------------------- 0.70s
2025-06-02 19:47:58.998593 | orchestrator | Reboot system - wait for the reboot to complete ------------------------- 0.60s
2025-06-02 19:47:59.531889 | orchestrator | + osism apply wait-for-connection -l testbed-nodes -e ireallymeanit=yes
2025-06-02 19:48:01.214309 | orchestrator | Registering Redlock._acquired_script
2025-06-02 19:48:01.214413 | orchestrator | Registering Redlock._extend_script
2025-06-02 19:48:01.214426 | orchestrator | Registering Redlock._release_script
2025-06-02 19:48:01.274241 | orchestrator | 2025-06-02 19:48:01 | INFO  | Task bb894bfe-b3a3-48be-9e32-01b06f7f395b (wait-for-connection) was prepared for execution.
2025-06-02 19:48:01.274322 | orchestrator | 2025-06-02 19:48:01 | INFO  | It takes a moment until task bb894bfe-b3a3-48be-9e32-01b06f7f395b (wait-for-connection) has been started and output is visible here.
2025-06-02 19:48:05.110274 | orchestrator |
2025-06-02 19:48:05.112606 | orchestrator | PLAY [Wait until remote systems are reachable] *********************************
2025-06-02 19:48:05.113466 | orchestrator |
2025-06-02 19:48:05.116266 | orchestrator | TASK [Wait until remote system is reachable] ***********************************
2025-06-02 19:48:05.116719 | orchestrator | Monday 02 June 2025 19:48:05 +0000 (0:00:00.174) 0:00:00.174 ***********
2025-06-02 19:48:17.475023 | orchestrator | ok: [testbed-node-2]
2025-06-02 19:48:17.475199 | orchestrator | ok: [testbed-node-1]
2025-06-02 19:48:17.475228 | orchestrator | ok: [testbed-node-3]
2025-06-02 19:48:17.476569 | orchestrator | ok: [testbed-node-0]
2025-06-02 19:48:17.477279 | orchestrator | ok: [testbed-node-4]
2025-06-02 19:48:17.478112 | orchestrator | ok: [testbed-node-5]
2025-06-02 19:48:17.478822 | orchestrator |
2025-06-02 19:48:17.479733 | orchestrator | PLAY RECAP *********************************************************************
2025-06-02 19:48:17.480117 | orchestrator | 2025-06-02 19:48:17 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-06-02 19:48:17.480413 | orchestrator | 2025-06-02 19:48:17 | INFO  | Please wait and do not abort execution.
2025-06-02 19:48:17.481472 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-02 19:48:17.482249 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-02 19:48:17.483023 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-02 19:48:17.483572 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-02 19:48:17.484242 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-02 19:48:17.484627 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-02 19:48:17.485304 | orchestrator |
2025-06-02 19:48:17.485699 | orchestrator |
2025-06-02 19:48:17.486170 | orchestrator | TASKS RECAP ********************************************************************
2025-06-02 19:48:17.486813 | orchestrator | Monday 02 June 2025 19:48:17 +0000 (0:00:12.363) 0:00:12.538 ***********
2025-06-02 19:48:17.487118 | orchestrator | ===============================================================================
2025-06-02 19:48:17.488051 | orchestrator | Wait until remote system is reachable ---------------------------------- 12.36s
2025-06-02 19:48:18.024310 | orchestrator | + osism apply hddtemp
2025-06-02 19:48:19.700818 | orchestrator | Registering Redlock._acquired_script
2025-06-02 19:48:19.700944 | orchestrator | Registering Redlock._extend_script
2025-06-02 19:48:19.700971 | orchestrator | Registering Redlock._release_script
2025-06-02 19:48:19.757866 | orchestrator | 2025-06-02 19:48:19 | INFO  | Task 81838691-b862-4c3c-be40-39711687f3b2 (hddtemp) was prepared for execution.
2025-06-02 19:48:19.757969 | orchestrator | 2025-06-02 19:48:19 | INFO  | It takes a moment until task 81838691-b862-4c3c-be40-39711687f3b2 (hddtemp) has been started and output is visible here. 2025-06-02 19:48:23.822562 | orchestrator | 2025-06-02 19:48:23.824911 | orchestrator | PLAY [Apply role hddtemp] ****************************************************** 2025-06-02 19:48:23.824986 | orchestrator | 2025-06-02 19:48:23.826838 | orchestrator | TASK [osism.services.hddtemp : Gather variables for each operating system] ***** 2025-06-02 19:48:23.827718 | orchestrator | Monday 02 June 2025 19:48:23 +0000 (0:00:00.258) 0:00:00.258 *********** 2025-06-02 19:48:23.973888 | orchestrator | ok: [testbed-manager] 2025-06-02 19:48:24.052746 | orchestrator | ok: [testbed-node-0] 2025-06-02 19:48:24.129563 | orchestrator | ok: [testbed-node-1] 2025-06-02 19:48:24.216093 | orchestrator | ok: [testbed-node-2] 2025-06-02 19:48:24.403636 | orchestrator | ok: [testbed-node-3] 2025-06-02 19:48:24.539424 | orchestrator | ok: [testbed-node-4] 2025-06-02 19:48:24.539637 | orchestrator | ok: [testbed-node-5] 2025-06-02 19:48:24.539920 | orchestrator | 2025-06-02 19:48:24.540381 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific install tasks] **** 2025-06-02 19:48:24.540938 | orchestrator | Monday 02 June 2025 19:48:24 +0000 (0:00:00.717) 0:00:00.976 *********** 2025-06-02 19:48:25.707051 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-02 19:48:25.707553 | orchestrator | 2025-06-02 19:48:25.711231 | orchestrator | TASK [osism.services.hddtemp : Remove hddtemp package] ************************* 2025-06-02 19:48:25.711264 | orchestrator | Monday 02 June 2025 19:48:25 +0000 (0:00:01.165) 0:00:02.142 *********** 2025-06-02 19:48:27.627115 | 
orchestrator | ok: [testbed-manager] 2025-06-02 19:48:27.629383 | orchestrator | ok: [testbed-node-0] 2025-06-02 19:48:27.629423 | orchestrator | ok: [testbed-node-1] 2025-06-02 19:48:27.629643 | orchestrator | ok: [testbed-node-2] 2025-06-02 19:48:27.631326 | orchestrator | ok: [testbed-node-3] 2025-06-02 19:48:27.632316 | orchestrator | ok: [testbed-node-4] 2025-06-02 19:48:27.632753 | orchestrator | ok: [testbed-node-5] 2025-06-02 19:48:27.633846 | orchestrator | 2025-06-02 19:48:27.634198 | orchestrator | TASK [osism.services.hddtemp : Enable Kernel Module drivetemp] ***************** 2025-06-02 19:48:27.636053 | orchestrator | Monday 02 June 2025 19:48:27 +0000 (0:00:01.921) 0:00:04.064 *********** 2025-06-02 19:48:28.263972 | orchestrator | changed: [testbed-manager] 2025-06-02 19:48:28.351157 | orchestrator | changed: [testbed-node-0] 2025-06-02 19:48:28.853853 | orchestrator | changed: [testbed-node-1] 2025-06-02 19:48:28.855617 | orchestrator | changed: [testbed-node-2] 2025-06-02 19:48:28.859349 | orchestrator | changed: [testbed-node-3] 2025-06-02 19:48:28.860819 | orchestrator | changed: [testbed-node-4] 2025-06-02 19:48:28.861633 | orchestrator | changed: [testbed-node-5] 2025-06-02 19:48:28.863299 | orchestrator | 2025-06-02 19:48:28.864111 | orchestrator | TASK [osism.services.hddtemp : Check if drivetemp module is available] ********* 2025-06-02 19:48:28.865070 | orchestrator | Monday 02 June 2025 19:48:28 +0000 (0:00:01.223) 0:00:05.287 *********** 2025-06-02 19:48:30.606671 | orchestrator | ok: [testbed-node-1] 2025-06-02 19:48:30.606827 | orchestrator | ok: [testbed-node-2] 2025-06-02 19:48:30.607032 | orchestrator | ok: [testbed-node-3] 2025-06-02 19:48:30.607820 | orchestrator | ok: [testbed-node-4] 2025-06-02 19:48:30.608557 | orchestrator | ok: [testbed-node-5] 2025-06-02 19:48:30.608842 | orchestrator | ok: [testbed-manager] 2025-06-02 19:48:30.609319 | orchestrator | ok: [testbed-node-0] 2025-06-02 19:48:30.609818 | orchestrator | 
2025-06-02 19:48:30.610432 | orchestrator | TASK [osism.services.hddtemp : Load Kernel Module drivetemp] ******************* 2025-06-02 19:48:30.610784 | orchestrator | Monday 02 June 2025 19:48:30 +0000 (0:00:01.758) 0:00:07.046 *********** 2025-06-02 19:48:31.061252 | orchestrator | skipping: [testbed-node-0] 2025-06-02 19:48:31.152150 | orchestrator | skipping: [testbed-node-1] 2025-06-02 19:48:31.227885 | orchestrator | changed: [testbed-manager] 2025-06-02 19:48:31.309272 | orchestrator | skipping: [testbed-node-2] 2025-06-02 19:48:31.446841 | orchestrator | skipping: [testbed-node-3] 2025-06-02 19:48:31.447045 | orchestrator | skipping: [testbed-node-4] 2025-06-02 19:48:31.447974 | orchestrator | skipping: [testbed-node-5] 2025-06-02 19:48:31.449792 | orchestrator | 2025-06-02 19:48:31.453544 | orchestrator | TASK [osism.services.hddtemp : Install lm-sensors] ***************************** 2025-06-02 19:48:31.453569 | orchestrator | Monday 02 June 2025 19:48:31 +0000 (0:00:00.837) 0:00:07.883 *********** 2025-06-02 19:48:43.328706 | orchestrator | changed: [testbed-manager] 2025-06-02 19:48:43.331877 | orchestrator | changed: [testbed-node-4] 2025-06-02 19:48:43.331913 | orchestrator | changed: [testbed-node-2] 2025-06-02 19:48:43.332310 | orchestrator | changed: [testbed-node-0] 2025-06-02 19:48:43.334002 | orchestrator | changed: [testbed-node-1] 2025-06-02 19:48:43.334426 | orchestrator | changed: [testbed-node-5] 2025-06-02 19:48:43.335391 | orchestrator | changed: [testbed-node-3] 2025-06-02 19:48:43.335673 | orchestrator | 2025-06-02 19:48:43.336687 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific service tasks] **** 2025-06-02 19:48:43.337090 | orchestrator | Monday 02 June 2025 19:48:43 +0000 (0:00:11.882) 0:00:19.766 *********** 2025-06-02 19:48:44.713977 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/service-Debian-family.yml for testbed-manager, 
testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-02 19:48:44.714680 | orchestrator | 2025-06-02 19:48:44.715275 | orchestrator | TASK [osism.services.hddtemp : Manage lm-sensors service] ********************** 2025-06-02 19:48:44.716085 | orchestrator | Monday 02 June 2025 19:48:44 +0000 (0:00:01.383) 0:00:21.150 *********** 2025-06-02 19:48:46.605546 | orchestrator | changed: [testbed-manager] 2025-06-02 19:48:46.605917 | orchestrator | changed: [testbed-node-2] 2025-06-02 19:48:46.607188 | orchestrator | changed: [testbed-node-1] 2025-06-02 19:48:46.608198 | orchestrator | changed: [testbed-node-0] 2025-06-02 19:48:46.610259 | orchestrator | changed: [testbed-node-3] 2025-06-02 19:48:46.610987 | orchestrator | changed: [testbed-node-4] 2025-06-02 19:48:46.611756 | orchestrator | changed: [testbed-node-5] 2025-06-02 19:48:46.612549 | orchestrator | 2025-06-02 19:48:46.613188 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-02 19:48:46.613711 | orchestrator | 2025-06-02 19:48:46 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-06-02 19:48:46.614269 | orchestrator | 2025-06-02 19:48:46 | INFO  | Please wait and do not abort execution. 
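The drivetemp tasks above (enable, check availability, load) can be sketched roughly as follows. This is a hedged reconstruction from the task names only; the role's actual task code is not part of this log, and the modules-load.d path is the conventional location, not confirmed here:

```shell
# Hypothetical sketch of the hddtemp role's drivetemp handling.
# The config path is parameterized so the default (/etc/modules-load.d/...)
# is only an assumption about where the role persists the module.
enable_drivetemp() {
    local conf="${1:-/etc/modules-load.d/drivetemp.conf}"
    # Persist the module across reboots
    echo drivetemp > "$conf"
    # Load it immediately only if the running kernel actually ships it
    # (matching the "Check if drivetemp module is available" gate above)
    if modinfo drivetemp >/dev/null 2>&1; then
        modprobe drivetemp
    fi
}
```

This also explains the skips in the play: on hosts where the module was already loaded (or the check gated it), "Load Kernel Module drivetemp" reports `skipping`.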
2025-06-02 19:48:46.615148 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-02 19:48:46.615931 | orchestrator | testbed-node-0 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-06-02 19:48:46.616714 | orchestrator | testbed-node-1 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-06-02 19:48:46.617291 | orchestrator | testbed-node-2 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-06-02 19:48:46.617732 | orchestrator | testbed-node-3 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-06-02 19:48:46.618254 | orchestrator | testbed-node-4 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-06-02 19:48:46.618835 | orchestrator | testbed-node-5 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-06-02 19:48:46.619372 | orchestrator | 2025-06-02 19:48:46.619780 | orchestrator | 2025-06-02 19:48:46.620336 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-02 19:48:46.620827 | orchestrator | Monday 02 June 2025 19:48:46 +0000 (0:00:01.892) 0:00:23.042 *********** 2025-06-02 19:48:46.621453 | orchestrator | =============================================================================== 2025-06-02 19:48:46.621860 | orchestrator | osism.services.hddtemp : Install lm-sensors ---------------------------- 11.88s 2025-06-02 19:48:46.622518 | orchestrator | osism.services.hddtemp : Remove hddtemp package ------------------------- 1.92s 2025-06-02 19:48:46.623374 | orchestrator | osism.services.hddtemp : Manage lm-sensors service ---------------------- 1.89s 2025-06-02 19:48:46.623485 | orchestrator | osism.services.hddtemp : Check if drivetemp module is available --------- 1.76s 2025-06-02 19:48:46.623960 | orchestrator | osism.services.hddtemp : Include distribution specific service tasks ---- 1.38s 
2025-06-02 19:48:46.624472 | orchestrator | osism.services.hddtemp : Enable Kernel Module drivetemp ----------------- 1.22s 2025-06-02 19:48:46.625035 | orchestrator | osism.services.hddtemp : Include distribution specific install tasks ---- 1.17s 2025-06-02 19:48:46.625473 | orchestrator | osism.services.hddtemp : Load Kernel Module drivetemp ------------------- 0.84s 2025-06-02 19:48:46.625910 | orchestrator | osism.services.hddtemp : Gather variables for each operating system ----- 0.72s 2025-06-02 19:48:47.251450 | orchestrator | ++ semver 9.1.0 7.1.1 2025-06-02 19:48:47.307267 | orchestrator | + [[ 1 -ge 0 ]] 2025-06-02 19:48:47.307356 | orchestrator | + sudo systemctl restart manager.service 2025-06-02 19:49:00.718971 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2025-06-02 19:49:00.719025 | orchestrator | + wait_for_container_healthy 60 ceph-ansible 2025-06-02 19:49:00.719038 | orchestrator | + local max_attempts=60 2025-06-02 19:49:00.719050 | orchestrator | + local name=ceph-ansible 2025-06-02 19:49:00.719062 | orchestrator | + local attempt_num=1 2025-06-02 19:49:00.719073 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-06-02 19:49:00.741930 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2025-06-02 19:49:00.741978 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-06-02 19:49:00.741991 | orchestrator | + sleep 5 2025-06-02 19:49:05.745687 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-06-02 19:49:05.776296 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2025-06-02 19:49:05.776393 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-06-02 19:49:05.776408 | orchestrator | + sleep 5 2025-06-02 19:49:10.780961 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-06-02 19:49:10.815965 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2025-06-02 19:49:10.816056 | orchestrator | + (( 
attempt_num++ == max_attempts )) 2025-06-02 19:49:10.816070 | orchestrator | + sleep 5 2025-06-02 19:49:15.820861 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-06-02 19:49:15.863865 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2025-06-02 19:49:15.863955 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-06-02 19:49:15.863969 | orchestrator | + sleep 5 2025-06-02 19:49:20.869604 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-06-02 19:49:20.908830 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2025-06-02 19:49:20.908915 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-06-02 19:49:20.908932 | orchestrator | + sleep 5 2025-06-02 19:49:25.913950 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-06-02 19:49:25.955084 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2025-06-02 19:49:25.955139 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-06-02 19:49:25.955152 | orchestrator | + sleep 5 2025-06-02 19:49:30.960717 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-06-02 19:49:30.997681 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2025-06-02 19:49:30.997771 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-06-02 19:49:30.997785 | orchestrator | + sleep 5 2025-06-02 19:49:36.005197 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-06-02 19:49:36.046290 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2025-06-02 19:49:36.046389 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-06-02 19:49:36.046404 | orchestrator | + sleep 5 2025-06-02 19:49:41.052696 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-06-02 19:49:41.085582 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2025-06-02 19:49:41.085678 | orchestrator | + (( attempt_num++ == 
max_attempts )) 2025-06-02 19:49:41.085694 | orchestrator | + sleep 5 2025-06-02 19:49:46.087154 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-06-02 19:49:46.121694 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2025-06-02 19:49:46.121764 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-06-02 19:49:46.121777 | orchestrator | + sleep 5 2025-06-02 19:49:51.126550 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-06-02 19:49:51.165811 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2025-06-02 19:49:51.165942 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-06-02 19:49:51.165958 | orchestrator | + sleep 5 2025-06-02 19:49:56.170999 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-06-02 19:49:56.207023 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2025-06-02 19:49:56.207107 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-06-02 19:49:56.207120 | orchestrator | + sleep 5 2025-06-02 19:50:01.211901 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-06-02 19:50:01.249216 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2025-06-02 19:50:01.249287 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-06-02 19:50:01.249302 | orchestrator | + sleep 5 2025-06-02 19:50:06.254778 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-06-02 19:50:06.294367 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-06-02 19:50:06.294504 | orchestrator | + wait_for_container_healthy 60 kolla-ansible 2025-06-02 19:50:06.294567 | orchestrator | + local max_attempts=60 2025-06-02 19:50:06.294586 | orchestrator | + local name=kolla-ansible 2025-06-02 19:50:06.294602 | orchestrator | + local attempt_num=1 2025-06-02 19:50:06.295747 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible 2025-06-02 
19:50:06.330432 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-06-02 19:50:06.330532 | orchestrator | + wait_for_container_healthy 60 osism-ansible 2025-06-02 19:50:06.330542 | orchestrator | + local max_attempts=60 2025-06-02 19:50:06.330550 | orchestrator | + local name=osism-ansible 2025-06-02 19:50:06.330557 | orchestrator | + local attempt_num=1 2025-06-02 19:50:06.331503 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible 2025-06-02 19:50:06.366050 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-06-02 19:50:06.366106 | orchestrator | + [[ true == \t\r\u\e ]] 2025-06-02 19:50:06.366112 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh 2025-06-02 19:50:06.541663 | orchestrator | ARA in ceph-ansible already disabled. 2025-06-02 19:50:06.691840 | orchestrator | ARA in kolla-ansible already disabled. 2025-06-02 19:50:06.822185 | orchestrator | ARA in osism-ansible already disabled. 2025-06-02 19:50:06.975089 | orchestrator | ARA in osism-kubernetes already disabled. 2025-06-02 19:50:06.975645 | orchestrator | + osism apply gather-facts 2025-06-02 19:50:08.688052 | orchestrator | Registering Redlock._acquired_script 2025-06-02 19:50:08.688151 | orchestrator | Registering Redlock._extend_script 2025-06-02 19:50:08.688166 | orchestrator | Registering Redlock._release_script 2025-06-02 19:50:08.760874 | orchestrator | 2025-06-02 19:50:08 | INFO  | Task 8b427113-7689-4cc1-8a71-ff31640473b3 (gather-facts) was prepared for execution. 2025-06-02 19:50:08.760984 | orchestrator | 2025-06-02 19:50:08 | INFO  | It takes a moment until task 8b427113-7689-4cc1-8a71-ff31640473b3 (gather-facts) has been started and output is visible here. 
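The `wait_for_container_healthy` helper traced above (the repeated `docker inspect` / `sleep 5` cycle) can be reconstructed from its xtrace. The variable names match the trace; the script source itself is not part of this log, so this is a sketch of the observed behavior:

```shell
# Reconstructed from the xtrace: poll Docker's health status until the
# named container reports "healthy", giving up after max_attempts polls.
wait_for_container_healthy() {
    local max_attempts="$1"
    local name="$2"
    local attempt_num=1
    until [[ "$(docker inspect -f '{{.State.Health.Status}}' "$name")" == healthy ]]; do
        if (( attempt_num++ == max_attempts )); then
            echo "container $name never became healthy" >&2
            return 1
        fi
        sleep 5
    done
}
```

In the run above, `ceph-ansible` cycled through `unhealthy` and `starting` for roughly a minute after the `manager.service` restart before reporting `healthy`, while `kolla-ansible` and `osism-ansible` were healthy on the first poll.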
2025-06-02 19:50:12.695077 | orchestrator | 2025-06-02 19:50:12.695194 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-06-02 19:50:12.697064 | orchestrator | 2025-06-02 19:50:12.698806 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-06-02 19:50:12.699333 | orchestrator | Monday 02 June 2025 19:50:12 +0000 (0:00:00.213) 0:00:00.213 *********** 2025-06-02 19:50:18.548895 | orchestrator | ok: [testbed-manager] 2025-06-02 19:50:18.549831 | orchestrator | ok: [testbed-node-1] 2025-06-02 19:50:18.550379 | orchestrator | ok: [testbed-node-0] 2025-06-02 19:50:18.551936 | orchestrator | ok: [testbed-node-2] 2025-06-02 19:50:18.553776 | orchestrator | ok: [testbed-node-4] 2025-06-02 19:50:18.554806 | orchestrator | ok: [testbed-node-3] 2025-06-02 19:50:18.555638 | orchestrator | ok: [testbed-node-5] 2025-06-02 19:50:18.557164 | orchestrator | 2025-06-02 19:50:18.561093 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2025-06-02 19:50:18.561287 | orchestrator | 2025-06-02 19:50:18.562634 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2025-06-02 19:50:18.563237 | orchestrator | Monday 02 June 2025 19:50:18 +0000 (0:00:05.861) 0:00:06.075 *********** 2025-06-02 19:50:18.684805 | orchestrator | skipping: [testbed-manager] 2025-06-02 19:50:18.747674 | orchestrator | skipping: [testbed-node-0] 2025-06-02 19:50:18.815801 | orchestrator | skipping: [testbed-node-1] 2025-06-02 19:50:18.885669 | orchestrator | skipping: [testbed-node-2] 2025-06-02 19:50:18.948916 | orchestrator | skipping: [testbed-node-3] 2025-06-02 19:50:18.984761 | orchestrator | skipping: [testbed-node-4] 2025-06-02 19:50:18.984862 | orchestrator | skipping: [testbed-node-5] 2025-06-02 19:50:18.985307 | orchestrator | 2025-06-02 19:50:18.985911 | orchestrator | PLAY RECAP 
********************************************************************* 2025-06-02 19:50:18.986257 | orchestrator | 2025-06-02 19:50:18 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-06-02 19:50:18.987219 | orchestrator | 2025-06-02 19:50:18 | INFO  | Please wait and do not abort execution. 2025-06-02 19:50:18.988247 | orchestrator | testbed-manager : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-06-02 19:50:18.989236 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-06-02 19:50:18.990541 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-06-02 19:50:18.991465 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-06-02 19:50:18.992358 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-06-02 19:50:18.993389 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-06-02 19:50:18.994130 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-06-02 19:50:18.994770 | orchestrator | 2025-06-02 19:50:18.995433 | orchestrator | 2025-06-02 19:50:18.996150 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-02 19:50:18.996755 | orchestrator | Monday 02 June 2025 19:50:18 +0000 (0:00:00.436) 0:00:06.512 *********** 2025-06-02 19:50:18.997175 | orchestrator | =============================================================================== 2025-06-02 19:50:18.997824 | orchestrator | Gathers facts about hosts ----------------------------------------------- 5.86s 2025-06-02 19:50:18.998617 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.44s 2025-06-02 19:50:19.399922 | 
orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/001-helpers.sh /usr/local/bin/deploy-helper 2025-06-02 19:50:19.407393 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/500-kubernetes.sh /usr/local/bin/deploy-kubernetes 2025-06-02 19:50:19.415763 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/510-clusterapi.sh /usr/local/bin/deploy-kubernetes-clusterapi 2025-06-02 19:50:19.424967 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-ansible.sh /usr/local/bin/deploy-ceph-with-ansible 2025-06-02 19:50:19.445261 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-rook.sh /usr/local/bin/deploy-ceph-with-rook 2025-06-02 19:50:19.453348 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/200-infrastructure.sh /usr/local/bin/deploy-infrastructure 2025-06-02 19:50:19.465390 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/300-openstack.sh /usr/local/bin/deploy-openstack 2025-06-02 19:50:19.482368 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/400-monitoring.sh /usr/local/bin/deploy-monitoring 2025-06-02 19:50:19.495643 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/500-kubernetes.sh /usr/local/bin/upgrade-kubernetes 2025-06-02 19:50:19.510955 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/510-clusterapi.sh /usr/local/bin/upgrade-kubernetes-clusterapi 2025-06-02 19:50:19.524830 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-ansible.sh /usr/local/bin/upgrade-ceph-with-ansible 2025-06-02 19:50:19.537227 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-rook.sh /usr/local/bin/upgrade-ceph-with-rook 2025-06-02 19:50:19.558000 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/200-infrastructure.sh /usr/local/bin/upgrade-infrastructure 2025-06-02 19:50:19.576712 | orchestrator | + sudo ln -sf 
/opt/configuration/scripts/upgrade/300-openstack.sh /usr/local/bin/upgrade-openstack 2025-06-02 19:50:19.592052 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/400-monitoring.sh /usr/local/bin/upgrade-monitoring 2025-06-02 19:50:19.607691 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/300-openstack.sh /usr/local/bin/bootstrap-openstack 2025-06-02 19:50:19.621533 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh /usr/local/bin/bootstrap-octavia 2025-06-02 19:50:19.641374 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/302-openstack-k8s-clusterapi-images.sh /usr/local/bin/bootstrap-clusterapi 2025-06-02 19:50:19.652720 | orchestrator | + sudo ln -sf /opt/configuration/scripts/disable-local-registry.sh /usr/local/bin/disable-local-registry 2025-06-02 19:50:19.670614 | orchestrator | + sudo ln -sf /opt/configuration/scripts/pull-images.sh /usr/local/bin/pull-images 2025-06-02 19:50:19.688454 | orchestrator | + [[ false == \t\r\u\e ]] 2025-06-02 19:50:19.949758 | orchestrator | ok: Runtime: 0:19:52.056736 2025-06-02 19:50:20.047249 | 2025-06-02 19:50:20.047391 | TASK [Deploy services] 2025-06-02 19:50:20.580687 | orchestrator | skipping: Conditional result was False 2025-06-02 19:50:20.598295 | 2025-06-02 19:50:20.598496 | TASK [Deploy in a nutshell] 2025-06-02 19:50:21.315017 | orchestrator | 2025-06-02 19:50:21.315143 | orchestrator | # PULL IMAGES 2025-06-02 19:50:21.315152 | orchestrator | 2025-06-02 19:50:21.315158 | orchestrator | + set -e 2025-06-02 19:50:21.315165 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-06-02 19:50:21.315174 | orchestrator | ++ export INTERACTIVE=false 2025-06-02 19:50:21.315180 | orchestrator | ++ INTERACTIVE=false 2025-06-02 19:50:21.315201 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-06-02 19:50:21.315211 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-06-02 19:50:21.315216 | orchestrator | + 
source /opt/manager-vars.sh 2025-06-02 19:50:21.315221 | orchestrator | ++ export NUMBER_OF_NODES=6 2025-06-02 19:50:21.315228 | orchestrator | ++ NUMBER_OF_NODES=6 2025-06-02 19:50:21.315232 | orchestrator | ++ export CEPH_VERSION=reef 2025-06-02 19:50:21.315239 | orchestrator | ++ CEPH_VERSION=reef 2025-06-02 19:50:21.315243 | orchestrator | ++ export CONFIGURATION_VERSION=main 2025-06-02 19:50:21.315250 | orchestrator | ++ CONFIGURATION_VERSION=main 2025-06-02 19:50:21.315254 | orchestrator | ++ export MANAGER_VERSION=9.1.0 2025-06-02 19:50:21.315260 | orchestrator | ++ MANAGER_VERSION=9.1.0 2025-06-02 19:50:21.315264 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2025-06-02 19:50:21.315268 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2025-06-02 19:50:21.315272 | orchestrator | ++ export ARA=false 2025-06-02 19:50:21.315276 | orchestrator | ++ ARA=false 2025-06-02 19:50:21.315280 | orchestrator | ++ export DEPLOY_MODE=manager 2025-06-02 19:50:21.315283 | orchestrator | ++ DEPLOY_MODE=manager 2025-06-02 19:50:21.315287 | orchestrator | ++ export TEMPEST=false 2025-06-02 19:50:21.315291 | orchestrator | ++ TEMPEST=false 2025-06-02 19:50:21.315294 | orchestrator | ++ export IS_ZUUL=true 2025-06-02 19:50:21.315298 | orchestrator | ++ IS_ZUUL=true 2025-06-02 19:50:21.315302 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.191 2025-06-02 19:50:21.315306 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.191 2025-06-02 19:50:21.315310 | orchestrator | ++ export EXTERNAL_API=false 2025-06-02 19:50:21.315314 | orchestrator | ++ EXTERNAL_API=false 2025-06-02 19:50:21.315317 | orchestrator | ++ export IMAGE_USER=ubuntu 2025-06-02 19:50:21.315321 | orchestrator | ++ IMAGE_USER=ubuntu 2025-06-02 19:50:21.315325 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2025-06-02 19:50:21.315329 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2025-06-02 19:50:21.315332 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2025-06-02 19:50:21.315340 | orchestrator | 
++ CEPH_STACK=ceph-ansible 2025-06-02 19:50:21.315344 | orchestrator | + echo 2025-06-02 19:50:21.315348 | orchestrator | + echo '# PULL IMAGES' 2025-06-02 19:50:21.315352 | orchestrator | + echo 2025-06-02 19:50:21.315364 | orchestrator | ++ semver 9.1.0 7.0.0 2025-06-02 19:50:21.372162 | orchestrator | + [[ 1 -ge 0 ]] 2025-06-02 19:50:21.372238 | orchestrator | + osism apply -r 2 -e custom pull-images 2025-06-02 19:50:22.956010 | orchestrator | 2025-06-02 19:50:22 | INFO  | Trying to run play pull-images in environment custom 2025-06-02 19:50:22.960755 | orchestrator | Registering Redlock._acquired_script 2025-06-02 19:50:22.960838 | orchestrator | Registering Redlock._extend_script 2025-06-02 19:50:22.960853 | orchestrator | Registering Redlock._release_script 2025-06-02 19:50:23.024561 | orchestrator | 2025-06-02 19:50:23 | INFO  | Task 518b4bd2-fe37-4e49-82ab-e722e9c7113c (pull-images) was prepared for execution. 2025-06-02 19:50:23.024599 | orchestrator | 2025-06-02 19:50:23 | INFO  | It takes a moment until task 518b4bd2-fe37-4e49-82ab-e722e9c7113c (pull-images) has been started and output is visible here. 
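The version gate above (`semver 9.1.0 7.0.0` followed by `[[ 1 -ge 0 ]]`) implies a helper that prints 1/0/-1 for greater/equal/less. The real `semver` binary is not shown in this log; a minimal sketch of that observed contract, using coreutils' version sort, could look like:

```shell
# Hypothetical stand-in for the `semver` helper seen in the xtrace:
# prints 1 if $1 > $2, 0 if equal, -1 if $1 < $2 (version ordering).
semver_cmp() {
    if [[ "$1" == "$2" ]]; then
        echo 0
    elif [[ "$(printf '%s\n' "$1" "$2" | sort -V | head -n1)" == "$2" ]]; then
        echo 1   # $2 sorts first, so $1 is the newer version
    else
        echo -1
    fi
}
```

`sort -V` handles multi-digit components correctly (e.g. 9.10.0 > 9.9.0), which a plain lexical comparison would get wrong.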
2025-06-02 19:50:26.999816 | orchestrator | 2025-06-02 19:50:27.003329 | orchestrator | PLAY [Pull images] ************************************************************* 2025-06-02 19:50:27.004368 | orchestrator | 2025-06-02 19:50:27.006031 | orchestrator | TASK [Pull keystone image] ***************************************************** 2025-06-02 19:50:27.007011 | orchestrator | Monday 02 June 2025 19:50:26 +0000 (0:00:00.155) 0:00:00.155 *********** 2025-06-02 19:51:36.805642 | orchestrator | changed: [testbed-manager] 2025-06-02 19:51:36.805757 | orchestrator | 2025-06-02 19:51:36.805875 | orchestrator | TASK [Pull other images] ******************************************************* 2025-06-02 19:51:36.807632 | orchestrator | Monday 02 June 2025 19:51:36 +0000 (0:01:09.808) 0:01:09.964 *********** 2025-06-02 19:52:26.191545 | orchestrator | changed: [testbed-manager] => (item=aodh) 2025-06-02 19:52:26.191784 | orchestrator | changed: [testbed-manager] => (item=barbican) 2025-06-02 19:52:26.191815 | orchestrator | changed: [testbed-manager] => (item=ceilometer) 2025-06-02 19:52:26.191830 | orchestrator | changed: [testbed-manager] => (item=cinder) 2025-06-02 19:52:26.193016 | orchestrator | changed: [testbed-manager] => (item=common) 2025-06-02 19:52:26.194345 | orchestrator | changed: [testbed-manager] => (item=designate) 2025-06-02 19:52:26.195026 | orchestrator | changed: [testbed-manager] => (item=glance) 2025-06-02 19:52:26.196043 | orchestrator | changed: [testbed-manager] => (item=grafana) 2025-06-02 19:52:26.196459 | orchestrator | changed: [testbed-manager] => (item=horizon) 2025-06-02 19:52:26.197452 | orchestrator | changed: [testbed-manager] => (item=ironic) 2025-06-02 19:52:26.197970 | orchestrator | changed: [testbed-manager] => (item=loadbalancer) 2025-06-02 19:52:26.198386 | orchestrator | changed: [testbed-manager] => (item=magnum) 2025-06-02 19:52:26.198905 | orchestrator | changed: [testbed-manager] => (item=mariadb) 2025-06-02 19:52:26.199818 
| orchestrator | changed: [testbed-manager] => (item=memcached) 2025-06-02 19:52:26.200003 | orchestrator | changed: [testbed-manager] => (item=neutron) 2025-06-02 19:52:26.200728 | orchestrator | changed: [testbed-manager] => (item=nova) 2025-06-02 19:52:26.201217 | orchestrator | changed: [testbed-manager] => (item=octavia) 2025-06-02 19:52:26.201712 | orchestrator | changed: [testbed-manager] => (item=opensearch) 2025-06-02 19:52:26.202463 | orchestrator | changed: [testbed-manager] => (item=openvswitch) 2025-06-02 19:52:26.202761 | orchestrator | changed: [testbed-manager] => (item=ovn) 2025-06-02 19:52:26.203182 | orchestrator | changed: [testbed-manager] => (item=placement) 2025-06-02 19:52:26.203767 | orchestrator | changed: [testbed-manager] => (item=rabbitmq) 2025-06-02 19:52:26.204267 | orchestrator | changed: [testbed-manager] => (item=redis) 2025-06-02 19:52:26.204671 | orchestrator | changed: [testbed-manager] => (item=skyline) 2025-06-02 19:52:26.205485 | orchestrator | 2025-06-02 19:52:26.205609 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-02 19:52:26.206129 | orchestrator | 2025-06-02 19:52:26 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-06-02 19:52:26.206156 | orchestrator | 2025-06-02 19:52:26 | INFO  | Please wait and do not abort execution. 
2025-06-02 19:52:26.206585 | orchestrator | testbed-manager : ok=2  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-02 19:52:26.206934 | orchestrator | 2025-06-02 19:52:26.207598 | orchestrator | 2025-06-02 19:52:26.207794 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-02 19:52:26.208299 | orchestrator | Monday 02 June 2025 19:52:26 +0000 (0:00:49.384) 0:01:59.348 *********** 2025-06-02 19:52:26.208675 | orchestrator | =============================================================================== 2025-06-02 19:52:26.209042 | orchestrator | Pull keystone image ---------------------------------------------------- 69.81s 2025-06-02 19:52:26.209533 | orchestrator | Pull other images ------------------------------------------------------ 49.38s 2025-06-02 19:52:28.417879 | orchestrator | 2025-06-02 19:52:28 | INFO  | Trying to run play wipe-partitions in environment custom 2025-06-02 19:52:28.422597 | orchestrator | Registering Redlock._acquired_script 2025-06-02 19:52:28.422658 | orchestrator | Registering Redlock._extend_script 2025-06-02 19:52:28.422672 | orchestrator | Registering Redlock._release_script 2025-06-02 19:52:28.485583 | orchestrator | 2025-06-02 19:52:28 | INFO  | Task 246cee4e-8b1c-4ad8-9701-0ab232570568 (wipe-partitions) was prepared for execution. 2025-06-02 19:52:28.485663 | orchestrator | 2025-06-02 19:52:28 | INFO  | It takes a moment until task 246cee4e-8b1c-4ad8-9701-0ab232570568 (wipe-partitions) has been started and output is visible here. 
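The `-r 2` flag passed to `osism apply` above (and the `OSISM_APPLY_RETRY=1` export earlier) indicate retry-on-failure semantics. The CLI's internal retry logic is not visible in this log; a generic retry wrapper in that spirit might look like:

```shell
# Hypothetical retry wrapper, not OSISM code: run a command up to
# $attempts times, returning success on the first attempt that passes.
retry() {
    local attempts="$1"; shift
    local n=1
    until "$@"; do
        if (( n++ >= attempts )); then
            echo "giving up after $attempts attempts: $*" >&2
            return 1
        fi
    done
}
```

Usage would be along the lines of `retry 2 osism apply pull-images`, mirroring `osism apply -r 2 ... pull-images` above.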
2025-06-02 19:52:32.518123 | orchestrator | 2025-06-02 19:52:32.518235 | orchestrator | PLAY [Wipe partitions] ********************************************************* 2025-06-02 19:52:32.518318 | orchestrator | 2025-06-02 19:52:32.521185 | orchestrator | TASK [Find all logical devices owned by UID 167] ******************************* 2025-06-02 19:52:32.523157 | orchestrator | Monday 02 June 2025 19:52:32 +0000 (0:00:00.130) 0:00:00.130 *********** 2025-06-02 19:52:33.095483 | orchestrator | changed: [testbed-node-4] 2025-06-02 19:52:33.095609 | orchestrator | changed: [testbed-node-3] 2025-06-02 19:52:33.096454 | orchestrator | changed: [testbed-node-5] 2025-06-02 19:52:33.096482 | orchestrator | 2025-06-02 19:52:33.096832 | orchestrator | TASK [Remove all rook related logical devices] ********************************* 2025-06-02 19:52:33.099705 | orchestrator | Monday 02 June 2025 19:52:33 +0000 (0:00:00.580) 0:00:00.710 *********** 2025-06-02 19:52:33.261648 | orchestrator | skipping: [testbed-node-3] 2025-06-02 19:52:33.364662 | orchestrator | skipping: [testbed-node-4] 2025-06-02 19:52:33.365270 | orchestrator | skipping: [testbed-node-5] 2025-06-02 19:52:33.365639 | orchestrator | 2025-06-02 19:52:33.366606 | orchestrator | TASK [Find all logical devices with prefix ceph] ******************************* 2025-06-02 19:52:33.367500 | orchestrator | Monday 02 June 2025 19:52:33 +0000 (0:00:00.270) 0:00:00.980 *********** 2025-06-02 19:52:34.087162 | orchestrator | ok: [testbed-node-4] 2025-06-02 19:52:34.088552 | orchestrator | ok: [testbed-node-3] 2025-06-02 19:52:34.092665 | orchestrator | ok: [testbed-node-5] 2025-06-02 19:52:34.093647 | orchestrator | 2025-06-02 19:52:34.094405 | orchestrator | TASK [Remove all ceph related logical devices] ********************************* 2025-06-02 19:52:34.095135 | orchestrator | Monday 02 June 2025 19:52:34 +0000 (0:00:00.719) 0:00:01.700 *********** 2025-06-02 19:52:34.240292 | orchestrator | skipping: 
[testbed-node-3] 2025-06-02 19:52:34.332891 | orchestrator | skipping: [testbed-node-4] 2025-06-02 19:52:34.333220 | orchestrator | skipping: [testbed-node-5] 2025-06-02 19:52:34.333761 | orchestrator | 2025-06-02 19:52:34.335239 | orchestrator | TASK [Check device availability] *********************************************** 2025-06-02 19:52:34.336092 | orchestrator | Monday 02 June 2025 19:52:34 +0000 (0:00:00.247) 0:00:01.947 *********** 2025-06-02 19:52:35.480483 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb) 2025-06-02 19:52:35.481143 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb) 2025-06-02 19:52:35.481922 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb) 2025-06-02 19:52:35.482362 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc) 2025-06-02 19:52:35.482729 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc) 2025-06-02 19:52:35.483350 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc) 2025-06-02 19:52:35.483709 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd) 2025-06-02 19:52:35.488530 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd) 2025-06-02 19:52:35.488631 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd) 2025-06-02 19:52:35.488645 | orchestrator | 2025-06-02 19:52:35.488658 | orchestrator | TASK [Wipe partitions with wipefs] ********************************************* 2025-06-02 19:52:35.488833 | orchestrator | Monday 02 June 2025 19:52:35 +0000 (0:00:01.148) 0:00:03.096 *********** 2025-06-02 19:52:36.812397 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdb) 2025-06-02 19:52:36.812589 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdb) 2025-06-02 19:52:36.812894 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdb) 2025-06-02 19:52:36.813226 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdc) 2025-06-02 19:52:36.815265 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdc) 2025-06-02 19:52:36.815556 | orchestrator | ok: 
[testbed-node-5] => (item=/dev/sdc) 2025-06-02 19:52:36.815879 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdd) 2025-06-02 19:52:36.819080 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdd) 2025-06-02 19:52:36.819239 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdd) 2025-06-02 19:52:36.819797 | orchestrator | 2025-06-02 19:52:36.820060 | orchestrator | TASK [Overwrite first 32M with zeros] ****************************************** 2025-06-02 19:52:36.820467 | orchestrator | Monday 02 June 2025 19:52:36 +0000 (0:00:01.330) 0:00:04.426 *********** 2025-06-02 19:52:39.135189 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb) 2025-06-02 19:52:39.136594 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb) 2025-06-02 19:52:39.137121 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb) 2025-06-02 19:52:39.137600 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc) 2025-06-02 19:52:39.137946 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc) 2025-06-02 19:52:39.138259 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc) 2025-06-02 19:52:39.141117 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd) 2025-06-02 19:52:39.141243 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd) 2025-06-02 19:52:39.141728 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd) 2025-06-02 19:52:39.142082 | orchestrator | 2025-06-02 19:52:39.142385 | orchestrator | TASK [Reload udev rules] ******************************************************* 2025-06-02 19:52:39.142807 | orchestrator | Monday 02 June 2025 19:52:39 +0000 (0:00:02.323) 0:00:06.750 *********** 2025-06-02 19:52:39.757917 | orchestrator | changed: [testbed-node-3] 2025-06-02 19:52:39.758075 | orchestrator | changed: [testbed-node-4] 2025-06-02 19:52:39.758234 | orchestrator | changed: [testbed-node-5] 2025-06-02 19:52:39.758694 | orchestrator | 2025-06-02 19:52:39.762278 | orchestrator | TASK [Request device events from the 
kernel] *********************************** 2025-06-02 19:52:39.762328 | orchestrator | Monday 02 June 2025 19:52:39 +0000 (0:00:00.622) 0:00:07.372 *********** 2025-06-02 19:52:40.427536 | orchestrator | changed: [testbed-node-3] 2025-06-02 19:52:40.427661 | orchestrator | changed: [testbed-node-4] 2025-06-02 19:52:40.427984 | orchestrator | changed: [testbed-node-5] 2025-06-02 19:52:40.428400 | orchestrator | 2025-06-02 19:52:40.428789 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-02 19:52:40.429120 | orchestrator | 2025-06-02 19:52:40 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-06-02 19:52:40.429646 | orchestrator | 2025-06-02 19:52:40 | INFO  | Please wait and do not abort execution. 2025-06-02 19:52:40.429819 | orchestrator | testbed-node-3 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-02 19:52:40.430294 | orchestrator | testbed-node-4 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-02 19:52:40.430777 | orchestrator | testbed-node-5 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-02 19:52:40.430966 | orchestrator | 2025-06-02 19:52:40.431551 | orchestrator | 2025-06-02 19:52:40.433982 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-02 19:52:40.434122 | orchestrator | Monday 02 June 2025 19:52:40 +0000 (0:00:00.666) 0:00:08.039 *********** 2025-06-02 19:52:40.434404 | orchestrator | =============================================================================== 2025-06-02 19:52:40.434697 | orchestrator | Overwrite first 32M with zeros ------------------------------------------ 2.32s 2025-06-02 19:52:40.435064 | orchestrator | Wipe partitions with wipefs --------------------------------------------- 1.33s 2025-06-02 19:52:40.435609 | orchestrator | Check device availability 
----------------------------------------------- 1.15s 2025-06-02 19:52:40.435901 | orchestrator | Find all logical devices with prefix ceph ------------------------------- 0.72s 2025-06-02 19:52:40.436190 | orchestrator | Request device events from the kernel ----------------------------------- 0.67s 2025-06-02 19:52:40.439368 | orchestrator | Reload udev rules ------------------------------------------------------- 0.62s 2025-06-02 19:52:40.439500 | orchestrator | Find all logical devices owned by UID 167 ------------------------------- 0.58s 2025-06-02 19:52:40.439762 | orchestrator | Remove all rook related logical devices --------------------------------- 0.27s 2025-06-02 19:52:40.440263 | orchestrator | Remove all ceph related logical devices --------------------------------- 0.25s 2025-06-02 19:52:42.556484 | orchestrator | Registering Redlock._acquired_script 2025-06-02 19:52:42.556589 | orchestrator | Registering Redlock._extend_script 2025-06-02 19:52:42.556605 | orchestrator | Registering Redlock._release_script 2025-06-02 19:52:42.608440 | orchestrator | 2025-06-02 19:52:42 | INFO  | Task d16ab5c4-d27b-4f14-8c37-825a4bfe1abe (facts) was prepared for execution. 2025-06-02 19:52:42.608514 | orchestrator | 2025-06-02 19:52:42 | INFO  | It takes a moment until task d16ab5c4-d27b-4f14-8c37-825a4bfe1abe (facts) has been started and output is visible here. 
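
The wipe play that just completed runs wipefs, zeroes the start of each disk, then reloads udev rules and re-triggers kernel device events. A minimal sketch of that sequence, assuming plausible command forms — the device list (`/dev/sdb`..`/dev/sdd`) and the 32M size come from the task names above, but the exact flags the tasks use are an assumption:

```python
# Sketch of the disk-wipe sequence from the play above; commands are
# built as argv lists (suitable for subprocess.run) rather than executed.
devices = ["/dev/sdb", "/dev/sdc", "/dev/sdd"]

def wipe_commands(dev: str) -> list:
    return [
        ["wipefs", "--all", dev],                 # drop partition/FS signatures
        ["dd", "if=/dev/zero", f"of={dev}",
         "bs=1M", "count=32", "conv=fsync"],      # overwrite first 32M with zeros
    ]

commands = [cmd for dev in devices for cmd in wipe_commands(dev)]
# The play then reloads udev rules and requests device events from the kernel:
commands += [["udevadm", "control", "--reload-rules"],
             ["udevadm", "trigger"]]
print(len(commands))  # 2 per device + 2 udev steps = 8
```

The udev reload/trigger pair at the end matches the last two tasks in the recap and ensures `/dev/disk/by-*` links reflect the freshly wiped state.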
2025-06-02 19:52:46.636720 | orchestrator | 2025-06-02 19:52:46.639948 | orchestrator | PLAY [Apply role facts] ******************************************************** 2025-06-02 19:52:46.639998 | orchestrator | 2025-06-02 19:52:46.640011 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2025-06-02 19:52:46.640073 | orchestrator | Monday 02 June 2025 19:52:46 +0000 (0:00:00.242) 0:00:00.242 *********** 2025-06-02 19:52:47.586880 | orchestrator | ok: [testbed-manager] 2025-06-02 19:52:47.586965 | orchestrator | ok: [testbed-node-0] 2025-06-02 19:52:47.587040 | orchestrator | ok: [testbed-node-1] 2025-06-02 19:52:47.588103 | orchestrator | ok: [testbed-node-2] 2025-06-02 19:52:47.589091 | orchestrator | ok: [testbed-node-3] 2025-06-02 19:52:47.589699 | orchestrator | ok: [testbed-node-4] 2025-06-02 19:52:47.590707 | orchestrator | ok: [testbed-node-5] 2025-06-02 19:52:47.591191 | orchestrator | 2025-06-02 19:52:47.591934 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2025-06-02 19:52:47.592311 | orchestrator | Monday 02 June 2025 19:52:47 +0000 (0:00:00.949) 0:00:01.191 *********** 2025-06-02 19:52:47.727784 | orchestrator | skipping: [testbed-manager] 2025-06-02 19:52:47.798272 | orchestrator | skipping: [testbed-node-0] 2025-06-02 19:52:47.867828 | orchestrator | skipping: [testbed-node-1] 2025-06-02 19:52:47.934933 | orchestrator | skipping: [testbed-node-2] 2025-06-02 19:52:48.006611 | orchestrator | skipping: [testbed-node-3] 2025-06-02 19:52:48.634445 | orchestrator | skipping: [testbed-node-4] 2025-06-02 19:52:48.636370 | orchestrator | skipping: [testbed-node-5] 2025-06-02 19:52:48.636393 | orchestrator | 2025-06-02 19:52:48.637984 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-06-02 19:52:48.638736 | orchestrator | 2025-06-02 19:52:48.639255 | orchestrator | TASK [Gathers facts about hosts] 
*********************************************** 2025-06-02 19:52:48.640411 | orchestrator | Monday 02 June 2025 19:52:48 +0000 (0:00:01.052) 0:00:02.244 *********** 2025-06-02 19:52:53.269746 | orchestrator | ok: [testbed-node-2] 2025-06-02 19:52:53.269820 | orchestrator | ok: [testbed-node-1] 2025-06-02 19:52:53.272853 | orchestrator | ok: [testbed-node-0] 2025-06-02 19:52:53.272947 | orchestrator | ok: [testbed-manager] 2025-06-02 19:52:53.273359 | orchestrator | ok: [testbed-node-4] 2025-06-02 19:52:53.276908 | orchestrator | ok: [testbed-node-3] 2025-06-02 19:52:53.277212 | orchestrator | ok: [testbed-node-5] 2025-06-02 19:52:53.277553 | orchestrator | 2025-06-02 19:52:53.277838 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2025-06-02 19:52:53.278239 | orchestrator | 2025-06-02 19:52:53.279772 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2025-06-02 19:52:53.280227 | orchestrator | Monday 02 June 2025 19:52:53 +0000 (0:00:04.634) 0:00:06.878 *********** 2025-06-02 19:52:53.458549 | orchestrator | skipping: [testbed-manager] 2025-06-02 19:52:53.538963 | orchestrator | skipping: [testbed-node-0] 2025-06-02 19:52:53.618806 | orchestrator | skipping: [testbed-node-1] 2025-06-02 19:52:53.728205 | orchestrator | skipping: [testbed-node-2] 2025-06-02 19:52:53.813021 | orchestrator | skipping: [testbed-node-3] 2025-06-02 19:52:53.854954 | orchestrator | skipping: [testbed-node-4] 2025-06-02 19:52:53.856846 | orchestrator | skipping: [testbed-node-5] 2025-06-02 19:52:53.859698 | orchestrator | 2025-06-02 19:52:53.860048 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-02 19:52:53.860389 | orchestrator | 2025-06-02 19:52:53 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 
2025-06-02 19:52:53.860507 | orchestrator | 2025-06-02 19:52:53 | INFO  | Please wait and do not abort execution. 2025-06-02 19:52:53.860985 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-02 19:52:53.861506 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-02 19:52:53.861793 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-02 19:52:53.862156 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-02 19:52:53.862575 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-02 19:52:53.862926 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-02 19:52:53.863334 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-02 19:52:53.863718 | orchestrator | 2025-06-02 19:52:53.863984 | orchestrator | 2025-06-02 19:52:53.866647 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-02 19:52:53.866976 | orchestrator | Monday 02 June 2025 19:52:53 +0000 (0:00:00.587) 0:00:07.465 *********** 2025-06-02 19:52:53.869407 | orchestrator | =============================================================================== 2025-06-02 19:52:53.869415 | orchestrator | Gathers facts about hosts ----------------------------------------------- 4.63s 2025-06-02 19:52:53.870674 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.05s 2025-06-02 19:52:53.871066 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 0.95s 2025-06-02 19:52:53.871521 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.59s 2025-06-02 
19:52:56.412822 | orchestrator | 2025-06-02 19:52:56 | INFO  | Task 3c90f5a7-28ae-4498-9d0b-324a08e4a894 (ceph-configure-lvm-volumes) was prepared for execution. 2025-06-02 19:52:56.414331 | orchestrator | 2025-06-02 19:52:56 | INFO  | It takes a moment until task 3c90f5a7-28ae-4498-9d0b-324a08e4a894 (ceph-configure-lvm-volumes) has been started and output is visible here. 2025-06-02 19:53:01.035900 | orchestrator | 2025-06-02 19:53:01.036717 | orchestrator | PLAY [Ceph configure LVM] ****************************************************** 2025-06-02 19:53:01.036837 | orchestrator | 2025-06-02 19:53:01.040977 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-06-02 19:53:01.041449 | orchestrator | Monday 02 June 2025 19:53:01 +0000 (0:00:00.387) 0:00:00.387 *********** 2025-06-02 19:53:01.290191 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-06-02 19:53:01.292784 | orchestrator | 2025-06-02 19:53:01.296614 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-06-02 19:53:01.296986 | orchestrator | Monday 02 June 2025 19:53:01 +0000 (0:00:00.254) 0:00:00.641 *********** 2025-06-02 19:53:01.509182 | orchestrator | ok: [testbed-node-3] 2025-06-02 19:53:01.509415 | orchestrator | 2025-06-02 19:53:01.512577 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-02 19:53:01.540728 | orchestrator | Monday 02 June 2025 19:53:01 +0000 (0:00:00.220) 0:00:00.862 *********** 2025-06-02 19:53:01.898351 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0) 2025-06-02 19:53:01.900671 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1) 2025-06-02 19:53:01.902258 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2) 2025-06-02 19:53:01.902807 | orchestrator | included: 
/ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3) 2025-06-02 19:53:01.903263 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4) 2025-06-02 19:53:01.904011 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5) 2025-06-02 19:53:01.904997 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6) 2025-06-02 19:53:01.906384 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7) 2025-06-02 19:53:01.906893 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda) 2025-06-02 19:53:01.907589 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb) 2025-06-02 19:53:01.907978 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc) 2025-06-02 19:53:01.908403 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd) 2025-06-02 19:53:01.908861 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0) 2025-06-02 19:53:01.910730 | orchestrator | 2025-06-02 19:53:01.911023 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-02 19:53:01.911689 | orchestrator | Monday 02 June 2025 19:53:01 +0000 (0:00:00.388) 0:00:01.251 *********** 2025-06-02 19:53:02.320611 | orchestrator | skipping: [testbed-node-3] 2025-06-02 19:53:02.320724 | orchestrator | 2025-06-02 19:53:02.320977 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-02 19:53:02.321465 | orchestrator | Monday 02 June 2025 19:53:02 +0000 (0:00:00.417) 0:00:01.669 *********** 2025-06-02 19:53:02.552940 | orchestrator | skipping: [testbed-node-3] 2025-06-02 19:53:02.553816 | orchestrator | 2025-06-02 19:53:02.554843 | orchestrator | 
TASK [Add known links to the list of available block devices] ****************** 2025-06-02 19:53:02.554876 | orchestrator | Monday 02 June 2025 19:53:02 +0000 (0:00:00.238) 0:00:01.908 *********** 2025-06-02 19:53:02.722796 | orchestrator | skipping: [testbed-node-3] 2025-06-02 19:53:02.725751 | orchestrator | 2025-06-02 19:53:02.725813 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-02 19:53:02.725827 | orchestrator | Monday 02 June 2025 19:53:02 +0000 (0:00:00.167) 0:00:02.075 *********** 2025-06-02 19:53:02.903693 | orchestrator | skipping: [testbed-node-3] 2025-06-02 19:53:02.903800 | orchestrator | 2025-06-02 19:53:02.903873 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-02 19:53:02.904078 | orchestrator | Monday 02 June 2025 19:53:02 +0000 (0:00:00.178) 0:00:02.254 *********** 2025-06-02 19:53:03.128503 | orchestrator | skipping: [testbed-node-3] 2025-06-02 19:53:03.128618 | orchestrator | 2025-06-02 19:53:03.128703 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-02 19:53:03.131388 | orchestrator | Monday 02 June 2025 19:53:03 +0000 (0:00:00.223) 0:00:02.478 *********** 2025-06-02 19:53:03.348404 | orchestrator | skipping: [testbed-node-3] 2025-06-02 19:53:03.349399 | orchestrator | 2025-06-02 19:53:03.349721 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-02 19:53:03.353679 | orchestrator | Monday 02 June 2025 19:53:03 +0000 (0:00:00.223) 0:00:02.701 *********** 2025-06-02 19:53:03.539760 | orchestrator | skipping: [testbed-node-3] 2025-06-02 19:53:03.540465 | orchestrator | 2025-06-02 19:53:03.540512 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-02 19:53:03.543897 | orchestrator | Monday 02 June 2025 19:53:03 +0000 (0:00:00.192) 0:00:02.893 *********** 2025-06-02 
19:53:03.711524 | orchestrator | skipping: [testbed-node-3] 2025-06-02 19:53:03.711982 | orchestrator | 2025-06-02 19:53:03.716494 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-02 19:53:03.716523 | orchestrator | Monday 02 June 2025 19:53:03 +0000 (0:00:00.172) 0:00:03.065 *********** 2025-06-02 19:53:04.098906 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_3b3451ec-fae9-4227-a22d-4a5dda6aaaab) 2025-06-02 19:53:04.099014 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_3b3451ec-fae9-4227-a22d-4a5dda6aaaab) 2025-06-02 19:53:04.101737 | orchestrator | 2025-06-02 19:53:04.102534 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-02 19:53:04.102679 | orchestrator | Monday 02 June 2025 19:53:04 +0000 (0:00:00.386) 0:00:03.452 *********** 2025-06-02 19:53:04.481401 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_f90c13d8-18de-4224-a0ec-2fb9bc967aba) 2025-06-02 19:53:04.481693 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_f90c13d8-18de-4224-a0ec-2fb9bc967aba) 2025-06-02 19:53:04.481951 | orchestrator | 2025-06-02 19:53:04.483270 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-02 19:53:04.484233 | orchestrator | Monday 02 June 2025 19:53:04 +0000 (0:00:00.382) 0:00:03.834 *********** 2025-06-02 19:53:04.936799 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_31522631-626d-4eab-bbf4-d80ec429ee40) 2025-06-02 19:53:04.938235 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_31522631-626d-4eab-bbf4-d80ec429ee40) 2025-06-02 19:53:04.939530 | orchestrator | 2025-06-02 19:53:04.940478 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-02 19:53:04.941576 | orchestrator | Monday 02 June 2025 19:53:04 +0000 
(0:00:00.454) 0:00:04.288 *********** 2025-06-02 19:53:05.477560 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_2edf9efd-121b-4ff6-b6f5-d420782ba04f) 2025-06-02 19:53:05.478363 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_2edf9efd-121b-4ff6-b6f5-d420782ba04f) 2025-06-02 19:53:05.479770 | orchestrator | 2025-06-02 19:53:05.480942 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-02 19:53:05.481683 | orchestrator | Monday 02 June 2025 19:53:05 +0000 (0:00:00.540) 0:00:04.829 *********** 2025-06-02 19:53:06.107077 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-06-02 19:53:06.108687 | orchestrator | 2025-06-02 19:53:06.109825 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 19:53:06.110934 | orchestrator | Monday 02 June 2025 19:53:06 +0000 (0:00:00.632) 0:00:05.461 *********** 2025-06-02 19:53:06.558484 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0) 2025-06-02 19:53:06.559709 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1) 2025-06-02 19:53:06.562302 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2) 2025-06-02 19:53:06.563062 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3) 2025-06-02 19:53:06.565329 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4) 2025-06-02 19:53:06.565945 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5) 2025-06-02 19:53:06.567894 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6) 2025-06-02 19:53:06.570590 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml 
for testbed-node-3 => (item=loop7) 2025-06-02 19:53:06.572858 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda) 2025-06-02 19:53:06.574727 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb) 2025-06-02 19:53:06.575881 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc) 2025-06-02 19:53:06.576895 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd) 2025-06-02 19:53:06.577734 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0) 2025-06-02 19:53:06.578134 | orchestrator | 2025-06-02 19:53:06.578604 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 19:53:06.578991 | orchestrator | Monday 02 June 2025 19:53:06 +0000 (0:00:00.449) 0:00:05.911 *********** 2025-06-02 19:53:06.828162 | orchestrator | skipping: [testbed-node-3] 2025-06-02 19:53:06.828248 | orchestrator | 2025-06-02 19:53:06.828258 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 19:53:06.828318 | orchestrator | Monday 02 June 2025 19:53:06 +0000 (0:00:00.269) 0:00:06.180 *********** 2025-06-02 19:53:07.040818 | orchestrator | skipping: [testbed-node-3] 2025-06-02 19:53:07.040957 | orchestrator | 2025-06-02 19:53:07.041897 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 19:53:07.044088 | orchestrator | Monday 02 June 2025 19:53:07 +0000 (0:00:00.211) 0:00:06.391 *********** 2025-06-02 19:53:07.251649 | orchestrator | skipping: [testbed-node-3] 2025-06-02 19:53:07.252901 | orchestrator | 2025-06-02 19:53:07.254709 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 19:53:07.256588 | orchestrator | Monday 02 June 2025 19:53:07 +0000 
(0:00:00.210) 0:00:06.601 *********** 2025-06-02 19:53:07.477886 | orchestrator | skipping: [testbed-node-3] 2025-06-02 19:53:07.480942 | orchestrator | 2025-06-02 19:53:07.481375 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 19:53:07.482842 | orchestrator | Monday 02 June 2025 19:53:07 +0000 (0:00:00.225) 0:00:06.827 *********** 2025-06-02 19:53:07.683134 | orchestrator | skipping: [testbed-node-3] 2025-06-02 19:53:07.683713 | orchestrator | 2025-06-02 19:53:07.684707 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 19:53:07.686652 | orchestrator | Monday 02 June 2025 19:53:07 +0000 (0:00:00.208) 0:00:07.036 *********** 2025-06-02 19:53:07.888806 | orchestrator | skipping: [testbed-node-3] 2025-06-02 19:53:07.890666 | orchestrator | 2025-06-02 19:53:07.894201 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 19:53:07.897613 | orchestrator | Monday 02 June 2025 19:53:07 +0000 (0:00:00.206) 0:00:07.242 *********** 2025-06-02 19:53:08.083518 | orchestrator | skipping: [testbed-node-3] 2025-06-02 19:53:08.084745 | orchestrator | 2025-06-02 19:53:08.085032 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 19:53:08.086633 | orchestrator | Monday 02 June 2025 19:53:08 +0000 (0:00:00.195) 0:00:07.437 *********** 2025-06-02 19:53:08.261954 | orchestrator | skipping: [testbed-node-3] 2025-06-02 19:53:08.264349 | orchestrator | 2025-06-02 19:53:08.266114 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 19:53:08.266717 | orchestrator | Monday 02 June 2025 19:53:08 +0000 (0:00:00.178) 0:00:07.615 *********** 2025-06-02 19:53:09.283394 | orchestrator | ok: [testbed-node-3] => (item=sda1) 2025-06-02 19:53:09.284040 | orchestrator | ok: [testbed-node-3] => (item=sda14) 2025-06-02 
19:53:09.285624 | orchestrator | ok: [testbed-node-3] => (item=sda15) 2025-06-02 19:53:09.289658 | orchestrator | ok: [testbed-node-3] => (item=sda16) 2025-06-02 19:53:09.290499 | orchestrator | 2025-06-02 19:53:09.291937 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 19:53:09.292926 | orchestrator | Monday 02 June 2025 19:53:09 +0000 (0:00:01.019) 0:00:08.635 *********** 2025-06-02 19:53:09.499189 | orchestrator | skipping: [testbed-node-3] 2025-06-02 19:53:09.500972 | orchestrator | 2025-06-02 19:53:09.503714 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 19:53:09.505644 | orchestrator | Monday 02 June 2025 19:53:09 +0000 (0:00:00.214) 0:00:08.849 *********** 2025-06-02 19:53:09.682599 | orchestrator | skipping: [testbed-node-3] 2025-06-02 19:53:09.684638 | orchestrator | 2025-06-02 19:53:09.685924 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 19:53:09.687355 | orchestrator | Monday 02 June 2025 19:53:09 +0000 (0:00:00.185) 0:00:09.035 *********** 2025-06-02 19:53:09.893949 | orchestrator | skipping: [testbed-node-3] 2025-06-02 19:53:09.895500 | orchestrator | 2025-06-02 19:53:09.896823 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 19:53:09.901086 | orchestrator | Monday 02 June 2025 19:53:09 +0000 (0:00:00.212) 0:00:09.248 *********** 2025-06-02 19:53:10.111391 | orchestrator | skipping: [testbed-node-3] 2025-06-02 19:53:10.112041 | orchestrator | 2025-06-02 19:53:10.113122 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2025-06-02 19:53:10.113407 | orchestrator | Monday 02 June 2025 19:53:10 +0000 (0:00:00.216) 0:00:09.464 *********** 2025-06-02 19:53:10.290930 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': None}) 2025-06-02 19:53:10.292855 | 
orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': None}) 2025-06-02 19:53:10.293066 | orchestrator | 2025-06-02 19:53:10.295518 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2025-06-02 19:53:10.297174 | orchestrator | Monday 02 June 2025 19:53:10 +0000 (0:00:00.180) 0:00:09.644 *********** 2025-06-02 19:53:10.434675 | orchestrator | skipping: [testbed-node-3] 2025-06-02 19:53:10.434862 | orchestrator | 2025-06-02 19:53:10.435620 | orchestrator | TASK [Generate DB VG names] **************************************************** 2025-06-02 19:53:10.439700 | orchestrator | Monday 02 June 2025 19:53:10 +0000 (0:00:00.141) 0:00:09.786 *********** 2025-06-02 19:53:10.574313 | orchestrator | skipping: [testbed-node-3] 2025-06-02 19:53:10.574540 | orchestrator | 2025-06-02 19:53:10.574561 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2025-06-02 19:53:10.574647 | orchestrator | Monday 02 June 2025 19:53:10 +0000 (0:00:00.138) 0:00:09.924 *********** 2025-06-02 19:53:10.711270 | orchestrator | skipping: [testbed-node-3] 2025-06-02 19:53:10.712886 | orchestrator | 2025-06-02 19:53:10.712950 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2025-06-02 19:53:10.713488 | orchestrator | Monday 02 June 2025 19:53:10 +0000 (0:00:00.137) 0:00:10.061 *********** 2025-06-02 19:53:10.834389 | orchestrator | ok: [testbed-node-3] 2025-06-02 19:53:10.836049 | orchestrator | 2025-06-02 19:53:10.837578 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2025-06-02 19:53:10.839050 | orchestrator | Monday 02 June 2025 19:53:10 +0000 (0:00:00.126) 0:00:10.187 *********** 2025-06-02 19:53:11.011329 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '93e9f309-356a-50f8-bf6b-26db11b00033'}}) 2025-06-02 19:53:11.013080 | orchestrator | ok: 
[testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '01a13ba8-1f69-5051-bec5-e01e7e9b87e5'}})
2025-06-02 19:53:11.013257 | orchestrator |
2025-06-02 19:53:11.015338 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] *****************************
2025-06-02 19:53:11.016068 | orchestrator | Monday 02 June 2025 19:53:11 +0000 (0:00:00.175) 0:00:10.363 ***********
2025-06-02 19:53:11.184402 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '93e9f309-356a-50f8-bf6b-26db11b00033'}})
2025-06-02 19:53:11.184739 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '01a13ba8-1f69-5051-bec5-e01e7e9b87e5'}})
2025-06-02 19:53:11.185232 | orchestrator | skipping: [testbed-node-3]
2025-06-02 19:53:11.187750 | orchestrator |
2025-06-02 19:53:11.188585 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] ****************************
2025-06-02 19:53:11.188612 | orchestrator | Monday 02 June 2025 19:53:11 +0000 (0:00:00.173) 0:00:10.536 ***********
2025-06-02 19:53:11.533774 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '93e9f309-356a-50f8-bf6b-26db11b00033'}})
2025-06-02 19:53:11.534991 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '01a13ba8-1f69-5051-bec5-e01e7e9b87e5'}})
2025-06-02 19:53:11.535934 | orchestrator | skipping: [testbed-node-3]
2025-06-02 19:53:11.536924 | orchestrator |
2025-06-02 19:53:11.539562 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] ***********************
2025-06-02 19:53:11.539633 | orchestrator | Monday 02 June 2025 19:53:11 +0000 (0:00:00.350) 0:00:10.887 ***********
2025-06-02 19:53:11.687237 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '93e9f309-356a-50f8-bf6b-26db11b00033'}})
2025-06-02 19:53:11.689637 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '01a13ba8-1f69-5051-bec5-e01e7e9b87e5'}})
2025-06-02 19:53:11.693979 | orchestrator | skipping: [testbed-node-3]
2025-06-02 19:53:11.694885 | orchestrator |
2025-06-02 19:53:11.695713 | orchestrator | TASK [Compile lvm_volumes] *****************************************************
2025-06-02 19:53:11.696130 | orchestrator | Monday 02 June 2025 19:53:11 +0000 (0:00:00.152) 0:00:11.040 ***********
2025-06-02 19:53:11.852738 | orchestrator | ok: [testbed-node-3]
2025-06-02 19:53:11.854297 | orchestrator |
2025-06-02 19:53:11.855901 | orchestrator | TASK [Set OSD devices config data] *********************************************
2025-06-02 19:53:11.856942 | orchestrator | Monday 02 June 2025 19:53:11 +0000 (0:00:00.165) 0:00:11.205 ***********
2025-06-02 19:53:11.980064 | orchestrator | ok: [testbed-node-3]
2025-06-02 19:53:11.981560 | orchestrator |
2025-06-02 19:53:11.983304 | orchestrator | TASK [Set DB devices config data] **********************************************
2025-06-02 19:53:11.985094 | orchestrator | Monday 02 June 2025 19:53:11 +0000 (0:00:00.128) 0:00:11.334 ***********
2025-06-02 19:53:12.122830 | orchestrator | skipping: [testbed-node-3]
2025-06-02 19:53:12.123351 | orchestrator |
2025-06-02 19:53:12.125397 | orchestrator | TASK [Set WAL devices config data] *********************************************
2025-06-02 19:53:12.126741 | orchestrator | Monday 02 June 2025 19:53:12 +0000 (0:00:00.140) 0:00:11.475 ***********
2025-06-02 19:53:12.281021 | orchestrator | skipping: [testbed-node-3]
2025-06-02 19:53:12.282173 | orchestrator |
2025-06-02 19:53:12.283484 | orchestrator | TASK [Set DB+WAL devices config data] ******************************************
2025-06-02 19:53:12.285346 | orchestrator | Monday 02 June 2025 19:53:12 +0000 (0:00:00.158) 0:00:11.633 ***********
2025-06-02 19:53:12.448603 | orchestrator | skipping: [testbed-node-3]
2025-06-02 19:53:12.450496 | orchestrator |
2025-06-02 19:53:12.451570 | orchestrator | TASK [Print ceph_osd_devices] **************************************************
2025-06-02 19:53:12.455614 | orchestrator | Monday 02 June 2025 19:53:12 +0000 (0:00:00.167) 0:00:11.801 ***********
2025-06-02 19:53:12.609803 | orchestrator | ok: [testbed-node-3] => {
2025-06-02 19:53:12.610172 | orchestrator |  "ceph_osd_devices": {
2025-06-02 19:53:12.611956 | orchestrator |  "sdb": {
2025-06-02 19:53:12.612766 | orchestrator |  "osd_lvm_uuid": "93e9f309-356a-50f8-bf6b-26db11b00033"
2025-06-02 19:53:12.616502 | orchestrator |  },
2025-06-02 19:53:12.616970 | orchestrator |  "sdc": {
2025-06-02 19:53:12.617893 | orchestrator |  "osd_lvm_uuid": "01a13ba8-1f69-5051-bec5-e01e7e9b87e5"
2025-06-02 19:53:12.620623 | orchestrator |  }
2025-06-02 19:53:12.620981 | orchestrator |  }
2025-06-02 19:53:12.621967 | orchestrator | }
2025-06-02 19:53:12.622596 | orchestrator |
2025-06-02 19:53:12.623475 | orchestrator | TASK [Print WAL devices] *******************************************************
2025-06-02 19:53:12.624249 | orchestrator | Monday 02 June 2025 19:53:12 +0000 (0:00:00.161) 0:00:11.962 ***********
2025-06-02 19:53:12.750524 | orchestrator | skipping: [testbed-node-3]
2025-06-02 19:53:12.751256 | orchestrator |
2025-06-02 19:53:12.753094 | orchestrator | TASK [Print DB devices] ********************************************************
2025-06-02 19:53:12.753660 | orchestrator | Monday 02 June 2025 19:53:12 +0000 (0:00:00.141) 0:00:12.104 ***********
2025-06-02 19:53:12.900120 | orchestrator | skipping: [testbed-node-3]
2025-06-02 19:53:12.901753 | orchestrator |
2025-06-02 19:53:12.902903 | orchestrator | TASK [Print shared DB/WAL devices] *********************************************
2025-06-02 19:53:12.906312 | orchestrator | Monday 02 June 2025 19:53:12 +0000 (0:00:00.148) 0:00:12.252 ***********
2025-06-02 19:53:13.036327 | orchestrator | skipping: [testbed-node-3]
2025-06-02 19:53:13.037161 | orchestrator |
2025-06-02 19:53:13.038121 | orchestrator | TASK [Print configuration data] ************************************************
2025-06-02 19:53:13.038758 | orchestrator | Monday 02 June 2025 19:53:13 +0000 (0:00:00.137) 0:00:12.390 ***********
2025-06-02 19:53:13.279225 | orchestrator | changed: [testbed-node-3] => {
2025-06-02 19:53:13.280162 | orchestrator |  "_ceph_configure_lvm_config_data": {
2025-06-02 19:53:13.281510 | orchestrator |  "ceph_osd_devices": {
2025-06-02 19:53:13.282506 | orchestrator |  "sdb": {
2025-06-02 19:53:13.283770 | orchestrator |  "osd_lvm_uuid": "93e9f309-356a-50f8-bf6b-26db11b00033"
2025-06-02 19:53:13.287506 | orchestrator |  },
2025-06-02 19:53:13.290492 | orchestrator |  "sdc": {
2025-06-02 19:53:13.291448 | orchestrator |  "osd_lvm_uuid": "01a13ba8-1f69-5051-bec5-e01e7e9b87e5"
2025-06-02 19:53:13.293795 | orchestrator |  }
2025-06-02 19:53:13.295391 | orchestrator |  },
2025-06-02 19:53:13.297014 | orchestrator |  "lvm_volumes": [
2025-06-02 19:53:13.299490 | orchestrator |  {
2025-06-02 19:53:13.300365 | orchestrator |  "data": "osd-block-93e9f309-356a-50f8-bf6b-26db11b00033",
2025-06-02 19:53:13.301754 | orchestrator |  "data_vg": "ceph-93e9f309-356a-50f8-bf6b-26db11b00033"
2025-06-02 19:53:13.302595 | orchestrator |  },
2025-06-02 19:53:13.303230 | orchestrator |  {
2025-06-02 19:53:13.304346 | orchestrator |  "data": "osd-block-01a13ba8-1f69-5051-bec5-e01e7e9b87e5",
2025-06-02 19:53:13.305144 | orchestrator |  "data_vg": "ceph-01a13ba8-1f69-5051-bec5-e01e7e9b87e5"
2025-06-02 19:53:13.306507 | orchestrator |  }
2025-06-02 19:53:13.308981 | orchestrator |  ]
2025-06-02 19:53:13.309897 | orchestrator |  }
2025-06-02 19:53:13.310588 | orchestrator | }
2025-06-02 19:53:13.311167 | orchestrator |
2025-06-02 19:53:13.311997 | orchestrator | RUNNING HANDLER [Write configuration file] *************************************
2025-06-02 19:53:13.312402 | orchestrator | Monday 02 June 2025 19:53:13 +0000 (0:00:00.243) 0:00:12.633 ***********
2025-06-02 19:53:15.704942 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2025-06-02 19:53:15.706894 | orchestrator |
2025-06-02 19:53:15.708534 | orchestrator | PLAY [Ceph configure LVM] ******************************************************
2025-06-02 19:53:15.710752 | orchestrator |
2025-06-02 19:53:15.712562 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2025-06-02 19:53:15.713708 | orchestrator | Monday 02 June 2025 19:53:15 +0000 (0:00:02.423) 0:00:15.056 ***********
2025-06-02 19:53:15.989488 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)]
2025-06-02 19:53:15.990460 | orchestrator |
2025-06-02 19:53:15.991601 | orchestrator | TASK [Get initial list of available block devices] *****************************
2025-06-02 19:53:15.993054 | orchestrator | Monday 02 June 2025 19:53:15 +0000 (0:00:00.285) 0:00:15.342 ***********
2025-06-02 19:53:16.255001 | orchestrator | ok: [testbed-node-4]
2025-06-02 19:53:16.256170 | orchestrator |
2025-06-02 19:53:16.258563 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-02 19:53:16.260978 | orchestrator | Monday 02 June 2025 19:53:16 +0000 (0:00:00.264) 0:00:15.607 ***********
2025-06-02 19:53:16.606896 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0)
2025-06-02 19:53:16.608825 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1)
2025-06-02 19:53:16.609019 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2)
2025-06-02 19:53:16.610256 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3)
2025-06-02 19:53:16.611539 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4)
2025-06-02 19:53:16.612859 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5)
2025-06-02 19:53:16.614798 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6)
2025-06-02 19:53:16.615791 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7)
2025-06-02 19:53:16.616928 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda)
2025-06-02 19:53:16.620574 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb)
2025-06-02 19:53:16.620604 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc)
2025-06-02 19:53:16.620615 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd)
2025-06-02 19:53:16.620626 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0)
2025-06-02 19:53:16.621140 | orchestrator |
2025-06-02 19:53:16.622230 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-02 19:53:16.622398 | orchestrator | Monday 02 June 2025 19:53:16 +0000 (0:00:00.353) 0:00:15.960 ***********
2025-06-02 19:53:16.814756 | orchestrator | skipping: [testbed-node-4]
2025-06-02 19:53:16.815898 | orchestrator |
2025-06-02 19:53:16.817287 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-02 19:53:16.818662 | orchestrator | Monday 02 June 2025 19:53:16 +0000 (0:00:00.206) 0:00:16.167 ***********
2025-06-02 19:53:17.017104 | orchestrator | skipping: [testbed-node-4]
2025-06-02 19:53:17.019027 | orchestrator |
2025-06-02 19:53:17.020819 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-02 19:53:17.021563 | orchestrator | Monday 02 June 2025 19:53:17 +0000 (0:00:00.200) 0:00:16.367 ***********
2025-06-02 19:53:17.196401 | orchestrator | skipping: [testbed-node-4]
2025-06-02 19:53:17.196549 | orchestrator |
2025-06-02 19:53:17.197070 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-02 19:53:17.197439 | orchestrator | Monday 02 June 2025 19:53:17 +0000 (0:00:00.178) 0:00:16.546 ***********
2025-06-02 19:53:17.383496 | orchestrator | skipping: [testbed-node-4]
2025-06-02 19:53:17.385452 | orchestrator |
2025-06-02 19:53:17.385570 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-02 19:53:17.385678 | orchestrator | Monday 02 June 2025 19:53:17 +0000 (0:00:00.189) 0:00:16.736 ***********
2025-06-02 19:53:17.994572 | orchestrator | skipping: [testbed-node-4]
2025-06-02 19:53:17.996393 | orchestrator |
2025-06-02 19:53:17.997256 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-02 19:53:18.001898 | orchestrator | Monday 02 June 2025 19:53:17 +0000 (0:00:00.612) 0:00:17.349 ***********
2025-06-02 19:53:18.211295 | orchestrator | skipping: [testbed-node-4]
2025-06-02 19:53:18.213837 | orchestrator |
2025-06-02 19:53:18.214452 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-02 19:53:18.215843 | orchestrator | Monday 02 June 2025 19:53:18 +0000 (0:00:00.212) 0:00:17.562 ***********
2025-06-02 19:53:18.451287 | orchestrator | skipping: [testbed-node-4]
2025-06-02 19:53:18.451540 | orchestrator |
2025-06-02 19:53:18.453580 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-02 19:53:18.453609 | orchestrator | Monday 02 June 2025 19:53:18 +0000 (0:00:00.242) 0:00:17.804 ***********
2025-06-02 19:53:18.746393 | orchestrator | skipping: [testbed-node-4]
2025-06-02 19:53:18.747294 | orchestrator |
2025-06-02 19:53:18.747910 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-02 19:53:18.748474 | orchestrator | Monday 02 June 2025 19:53:18 +0000 (0:00:00.295) 0:00:18.100 ***********
2025-06-02 19:53:19.141658 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_453458da-4d99-4de0-a2fa-ec8f657b9d69)
2025-06-02 19:53:19.143904 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_453458da-4d99-4de0-a2fa-ec8f657b9d69)
2025-06-02 19:53:19.143940 | orchestrator |
2025-06-02 19:53:19.146135 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-02 19:53:19.150735 | orchestrator | Monday 02 June 2025 19:53:19 +0000 (0:00:00.394) 0:00:18.494 ***********
2025-06-02 19:53:19.567305 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_56067267-e29e-4b33-bc58-6a568e4c77ee)
2025-06-02 19:53:19.567541 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_56067267-e29e-4b33-bc58-6a568e4c77ee)
2025-06-02 19:53:19.570083 | orchestrator |
2025-06-02 19:53:19.571318 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-02 19:53:19.573758 | orchestrator | Monday 02 June 2025 19:53:19 +0000 (0:00:00.423) 0:00:18.917 ***********
2025-06-02 19:53:19.992335 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_afb213e9-57a6-474d-a5f5-62ab693fc54b)
2025-06-02 19:53:19.992526 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_afb213e9-57a6-474d-a5f5-62ab693fc54b)
2025-06-02 19:53:19.995587 | orchestrator |
2025-06-02 19:53:19.996096 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-02 19:53:19.996633 | orchestrator | Monday 02 June 2025 19:53:19 +0000 (0:00:00.426) 0:00:19.344 ***********
2025-06-02 19:53:20.445520 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_b537626e-57d0-4db8-bc93-475b5479d5db)
2025-06-02 19:53:20.445706 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_b537626e-57d0-4db8-bc93-475b5479d5db)
2025-06-02 19:53:20.447734 | orchestrator |
2025-06-02 19:53:20.448632 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-02 19:53:20.449560 | orchestrator | Monday 02 June 2025 19:53:20 +0000 (0:00:00.449) 0:00:19.794 ***********
2025-06-02 19:53:20.756382 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001)
2025-06-02 19:53:20.759653 | orchestrator |
2025-06-02 19:53:20.760008 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-02 19:53:20.760591 | orchestrator | Monday 02 June 2025 19:53:20 +0000 (0:00:00.316) 0:00:20.110 ***********
2025-06-02 19:53:21.117006 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0)
2025-06-02 19:53:21.117109 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1)
2025-06-02 19:53:21.117125 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2)
2025-06-02 19:53:21.117138 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3)
2025-06-02 19:53:21.117203 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4)
2025-06-02 19:53:21.117512 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5)
2025-06-02 19:53:21.117747 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6)
2025-06-02 19:53:21.117988 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7)
2025-06-02 19:53:21.118355 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda)
2025-06-02 19:53:21.120714 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb)
2025-06-02 19:53:21.121300 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc)
2025-06-02 19:53:21.121607 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd)
2025-06-02 19:53:21.124865 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0)
2025-06-02 19:53:21.125128 | orchestrator |
2025-06-02 19:53:21.125904 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-02 19:53:21.125932 | orchestrator | Monday 02 June 2025 19:53:21 +0000 (0:00:00.357) 0:00:20.468 ***********
2025-06-02 19:53:21.320647 | orchestrator | skipping: [testbed-node-4]
2025-06-02 19:53:21.320747 | orchestrator |
2025-06-02 19:53:21.320761 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-02 19:53:21.322189 | orchestrator | Monday 02 June 2025 19:53:21 +0000 (0:00:00.203) 0:00:20.672 ***********
2025-06-02 19:53:21.943165 | orchestrator | skipping: [testbed-node-4]
2025-06-02 19:53:21.943853 | orchestrator |
2025-06-02 19:53:21.944765 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-02 19:53:21.946407 | orchestrator | Monday 02 June 2025 19:53:21 +0000 (0:00:00.624) 0:00:21.296 ***********
2025-06-02 19:53:22.142823 | orchestrator | skipping: [testbed-node-4]
2025-06-02 19:53:22.143344 | orchestrator |
2025-06-02 19:53:22.144218 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-02 19:53:22.144952 | orchestrator | Monday 02 June 2025 19:53:22 +0000 (0:00:00.199) 0:00:21.496 ***********
2025-06-02 19:53:22.335847 | orchestrator | skipping: [testbed-node-4]
2025-06-02 19:53:22.336157 | orchestrator |
2025-06-02 19:53:22.344589 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-02 19:53:22.344628 | orchestrator | Monday 02 June 2025 19:53:22 +0000 (0:00:00.191) 0:00:21.687 ***********
2025-06-02 19:53:22.541602 | orchestrator | skipping: [testbed-node-4]
2025-06-02 19:53:22.542107 | orchestrator |
2025-06-02 19:53:22.542791 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-02 19:53:22.543663 | orchestrator | Monday 02 June 2025 19:53:22 +0000 (0:00:00.206) 0:00:21.894 ***********
2025-06-02 19:53:22.734788 | orchestrator | skipping: [testbed-node-4]
2025-06-02 19:53:22.735037 | orchestrator |
2025-06-02 19:53:22.735724 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-02 19:53:22.736104 | orchestrator | Monday 02 June 2025 19:53:22 +0000 (0:00:00.193) 0:00:22.088 ***********
2025-06-02 19:53:22.933753 | orchestrator | skipping: [testbed-node-4]
2025-06-02 19:53:22.934449 | orchestrator |
2025-06-02 19:53:22.935051 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-02 19:53:22.935412 | orchestrator | Monday 02 June 2025 19:53:22 +0000 (0:00:00.199) 0:00:22.288 ***********
2025-06-02 19:53:23.118277 | orchestrator | skipping: [testbed-node-4]
2025-06-02 19:53:23.118556 | orchestrator |
2025-06-02 19:53:23.120230 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-02 19:53:23.121445 | orchestrator | Monday 02 June 2025 19:53:23 +0000 (0:00:00.183) 0:00:22.471 ***********
2025-06-02 19:53:23.727310 | orchestrator | ok: [testbed-node-4] => (item=sda1)
2025-06-02 19:53:23.728614 | orchestrator | ok: [testbed-node-4] => (item=sda14)
2025-06-02 19:53:23.731481 | orchestrator | ok: [testbed-node-4] => (item=sda15)
2025-06-02 19:53:23.732305 | orchestrator | ok: [testbed-node-4] => (item=sda16)
2025-06-02 19:53:23.733563 | orchestrator |
2025-06-02 19:53:23.734641 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-02 19:53:23.735270 | orchestrator | Monday 02 June 2025 19:53:23 +0000 (0:00:00.609) 0:00:23.080 ***********
2025-06-02 19:53:23.920228 | orchestrator | skipping: [testbed-node-4]
2025-06-02 19:53:23.921814 | orchestrator |
2025-06-02 19:53:23.923965 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-02 19:53:23.924001 | orchestrator | Monday 02 June 2025 19:53:23 +0000 (0:00:00.192) 0:00:23.273 ***********
2025-06-02 19:53:24.116270 | orchestrator | skipping: [testbed-node-4]
2025-06-02 19:53:24.117153 | orchestrator |
2025-06-02 19:53:24.118182 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-02 19:53:24.118633 | orchestrator | Monday 02 June 2025 19:53:24 +0000 (0:00:00.195) 0:00:23.469 ***********
2025-06-02 19:53:24.304522 | orchestrator | skipping: [testbed-node-4]
2025-06-02 19:53:24.304631 | orchestrator |
2025-06-02 19:53:24.304691 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-02 19:53:24.304796 | orchestrator | Monday 02 June 2025 19:53:24 +0000 (0:00:00.185) 0:00:23.654 ***********
2025-06-02 19:53:24.502566 | orchestrator | skipping: [testbed-node-4]
2025-06-02 19:53:24.504663 | orchestrator |
2025-06-02 19:53:24.506391 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] ***********************************************
2025-06-02 19:53:24.509528 | orchestrator | Monday 02 June 2025 19:53:24 +0000 (0:00:00.201) 0:00:23.856 ***********
2025-06-02 19:53:24.875134 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': None})
2025-06-02 19:53:24.875291 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': None})
2025-06-02 19:53:24.876627 | orchestrator |
2025-06-02 19:53:24.880109 | orchestrator | TASK [Generate WAL VG names] ***************************************************
2025-06-02 19:53:24.881196 | orchestrator | Monday 02 June 2025 19:53:24 +0000 (0:00:00.371) 0:00:24.227 ***********
2025-06-02 19:53:25.031259 | orchestrator | skipping: [testbed-node-4]
2025-06-02 19:53:25.032335 | orchestrator |
2025-06-02 19:53:25.034696 | orchestrator | TASK [Generate DB VG names] ****************************************************
2025-06-02 19:53:25.038816 | orchestrator | Monday 02 June 2025 19:53:25 +0000 (0:00:00.157) 0:00:24.385 ***********
2025-06-02 19:53:25.181859 | orchestrator | skipping: [testbed-node-4]
2025-06-02 19:53:25.182523 | orchestrator |
2025-06-02 19:53:25.183367 | orchestrator | TASK [Generate shared DB/WAL VG names] *****************************************
2025-06-02 19:53:25.184628 | orchestrator | Monday 02 June 2025 19:53:25 +0000 (0:00:00.150) 0:00:24.536 ***********
2025-06-02 19:53:25.314837 | orchestrator | skipping: [testbed-node-4]
2025-06-02 19:53:25.316729 | orchestrator |
2025-06-02 19:53:25.319833 | orchestrator | TASK [Define lvm_volumes structures] *******************************************
2025-06-02 19:53:25.321057 | orchestrator | Monday 02 June 2025 19:53:25 +0000 (0:00:00.131) 0:00:24.667 ***********
2025-06-02 19:53:25.463579 | orchestrator | ok: [testbed-node-4]
2025-06-02 19:53:25.464182 | orchestrator |
2025-06-02 19:53:25.468925 | orchestrator | TASK [Generate lvm_volumes structure (block only)] *****************************
2025-06-02 19:53:25.469903 | orchestrator | Monday 02 June 2025 19:53:25 +0000 (0:00:00.148) 0:00:24.816 ***********
2025-06-02 19:53:25.626666 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'bdb59653-b88e-5628-a878-3ed7677d43f1'}})
2025-06-02 19:53:25.626768 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'ee20b18c-4531-5b6f-acaf-50beaceb257d'}})
2025-06-02 19:53:25.627484 | orchestrator |
2025-06-02 19:53:25.630820 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] *****************************
2025-06-02 19:53:25.631804 | orchestrator | Monday 02 June 2025 19:53:25 +0000 (0:00:00.162) 0:00:24.979 ***********
2025-06-02 19:53:25.770169 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'bdb59653-b88e-5628-a878-3ed7677d43f1'}})
2025-06-02 19:53:25.771525 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'ee20b18c-4531-5b6f-acaf-50beaceb257d'}})
2025-06-02 19:53:25.776318 | orchestrator | skipping: [testbed-node-4]
2025-06-02 19:53:25.776934 | orchestrator |
2025-06-02 19:53:25.778193 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] ****************************
2025-06-02 19:53:25.778754 | orchestrator | Monday 02 June 2025 19:53:25 +0000 (0:00:00.144) 0:00:25.124 ***********
2025-06-02 19:53:25.918534 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'bdb59653-b88e-5628-a878-3ed7677d43f1'}})
2025-06-02 19:53:25.919582 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'ee20b18c-4531-5b6f-acaf-50beaceb257d'}})
2025-06-02 19:53:25.923090 | orchestrator | skipping: [testbed-node-4]
2025-06-02 19:53:25.924237 | orchestrator |
2025-06-02 19:53:25.925659 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] ***********************
2025-06-02 19:53:25.927230 | orchestrator | Monday 02 June 2025 19:53:25 +0000 (0:00:00.147) 0:00:25.271 ***********
2025-06-02 19:53:26.082486 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'bdb59653-b88e-5628-a878-3ed7677d43f1'}})
2025-06-02 19:53:26.083147 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'ee20b18c-4531-5b6f-acaf-50beaceb257d'}})
2025-06-02 19:53:26.087386 | orchestrator | skipping: [testbed-node-4]
2025-06-02 19:53:26.088446 | orchestrator |
2025-06-02 19:53:26.089149 | orchestrator | TASK [Compile lvm_volumes] *****************************************************
2025-06-02 19:53:26.090576 | orchestrator | Monday 02 June 2025 19:53:26 +0000 (0:00:00.162) 0:00:25.433 ***********
2025-06-02 19:53:26.219807 | orchestrator | ok: [testbed-node-4]
2025-06-02 19:53:26.220628 | orchestrator |
2025-06-02 19:53:26.222145 | orchestrator | TASK [Set OSD devices config data] *********************************************
2025-06-02 19:53:26.225587 | orchestrator | Monday 02 June 2025 19:53:26 +0000 (0:00:00.139) 0:00:25.573 ***********
2025-06-02 19:53:26.363108 | orchestrator | ok: [testbed-node-4]
2025-06-02 19:53:26.364167 | orchestrator |
2025-06-02 19:53:26.367889 | orchestrator | TASK [Set DB devices config data] **********************************************
2025-06-02 19:53:26.369141 | orchestrator | Monday 02 June 2025 19:53:26 +0000 (0:00:00.142) 0:00:25.715 ***********
2025-06-02 19:53:26.505685 | orchestrator | skipping: [testbed-node-4]
2025-06-02 19:53:26.506973 | orchestrator |
2025-06-02 19:53:26.507785 | orchestrator | TASK [Set WAL devices config data] *********************************************
2025-06-02 19:53:26.511158 | orchestrator | Monday 02 June 2025 19:53:26 +0000 (0:00:00.143) 0:00:25.859 ***********
2025-06-02 19:53:26.813311 | orchestrator | skipping: [testbed-node-4]
2025-06-02 19:53:26.813812 | orchestrator |
2025-06-02 19:53:26.814816 | orchestrator | TASK [Set DB+WAL devices config data] ******************************************
2025-06-02 19:53:26.817568 | orchestrator | Monday 02 June 2025 19:53:26 +0000 (0:00:00.307) 0:00:26.166 ***********
2025-06-02 19:53:26.952297 | orchestrator | skipping: [testbed-node-4]
2025-06-02 19:53:26.953703 | orchestrator |
2025-06-02 19:53:26.954505 | orchestrator | TASK [Print ceph_osd_devices] **************************************************
2025-06-02 19:53:26.956785 | orchestrator | Monday 02 June 2025 19:53:26 +0000 (0:00:00.139) 0:00:26.305 ***********
2025-06-02 19:53:27.087800 | orchestrator | ok: [testbed-node-4] => {
2025-06-02 19:53:27.089968 | orchestrator |  "ceph_osd_devices": {
2025-06-02 19:53:27.094263 | orchestrator |  "sdb": {
2025-06-02 19:53:27.095701 | orchestrator |  "osd_lvm_uuid": "bdb59653-b88e-5628-a878-3ed7677d43f1"
2025-06-02 19:53:27.097378 | orchestrator |  },
2025-06-02 19:53:27.099333 | orchestrator |  "sdc": {
2025-06-02 19:53:27.100572 | orchestrator |  "osd_lvm_uuid": "ee20b18c-4531-5b6f-acaf-50beaceb257d"
2025-06-02 19:53:27.101352 | orchestrator |  }
2025-06-02 19:53:27.102069 | orchestrator |  }
2025-06-02 19:53:27.103149 | orchestrator | }
2025-06-02 19:53:27.103755 | orchestrator |
2025-06-02 19:53:27.104291 | orchestrator | TASK [Print WAL devices] *******************************************************
2025-06-02 19:53:27.104878 | orchestrator | Monday 02 June 2025 19:53:27 +0000 (0:00:00.134) 0:00:26.440 ***********
2025-06-02 19:53:27.228096 | orchestrator | skipping: [testbed-node-4]
2025-06-02 19:53:27.229004 | orchestrator |
2025-06-02 19:53:27.230371 | orchestrator | TASK [Print DB devices] ********************************************************
2025-06-02 19:53:27.231253 | orchestrator | Monday 02 June 2025 19:53:27 +0000 (0:00:00.141) 0:00:26.581 ***********
2025-06-02 19:53:27.361028 | orchestrator | skipping: [testbed-node-4]
2025-06-02 19:53:27.361543 | orchestrator |
2025-06-02 19:53:27.361985 | orchestrator | TASK [Print shared DB/WAL devices] *********************************************
2025-06-02 19:53:27.363019 | orchestrator | Monday 02 June 2025 19:53:27 +0000 (0:00:00.132) 0:00:26.713 ***********
2025-06-02 19:53:27.497878 | orchestrator | skipping: [testbed-node-4]
2025-06-02 19:53:27.499289 | orchestrator |
2025-06-02 19:53:27.501048 | orchestrator | TASK [Print configuration data] ************************************************
2025-06-02 19:53:27.505278 | orchestrator | Monday 02 June 2025 19:53:27 +0000 (0:00:00.137) 0:00:26.851 ***********
2025-06-02 19:53:27.694394 | orchestrator | changed: [testbed-node-4] => {
2025-06-02 19:53:27.695591 | orchestrator |  "_ceph_configure_lvm_config_data": {
2025-06-02 19:53:27.697526 | orchestrator |  "ceph_osd_devices": {
2025-06-02 19:53:27.701302 | orchestrator |  "sdb": {
2025-06-02 19:53:27.702165 | orchestrator |  "osd_lvm_uuid": "bdb59653-b88e-5628-a878-3ed7677d43f1"
2025-06-02 19:53:27.702743 | orchestrator |  },
2025-06-02 19:53:27.704220 | orchestrator |  "sdc": {
2025-06-02 19:53:27.706189 | orchestrator |  "osd_lvm_uuid": "ee20b18c-4531-5b6f-acaf-50beaceb257d"
2025-06-02 19:53:27.707710 | orchestrator |  }
2025-06-02 19:53:27.708365 | orchestrator |  },
2025-06-02 19:53:27.708672 | orchestrator |  "lvm_volumes": [
2025-06-02 19:53:27.709076 | orchestrator |  {
2025-06-02 19:53:27.710539 | orchestrator |  "data": "osd-block-bdb59653-b88e-5628-a878-3ed7677d43f1",
2025-06-02 19:53:27.714296 | orchestrator |  "data_vg": "ceph-bdb59653-b88e-5628-a878-3ed7677d43f1"
2025-06-02 19:53:27.714401 | orchestrator |  },
2025-06-02 19:53:27.714666 | orchestrator |  {
2025-06-02 19:53:27.716189 | orchestrator |  "data": "osd-block-ee20b18c-4531-5b6f-acaf-50beaceb257d",
2025-06-02 19:53:27.716274 | orchestrator |  "data_vg": "ceph-ee20b18c-4531-5b6f-acaf-50beaceb257d"
2025-06-02 19:53:27.717010 | orchestrator |  }
2025-06-02 19:53:27.717223 | orchestrator |  ]
2025-06-02 19:53:27.718515 | orchestrator |  }
2025-06-02 19:53:27.718641 | orchestrator | }
2025-06-02 19:53:27.719894 | orchestrator |
2025-06-02 19:53:27.720078 | orchestrator | RUNNING HANDLER [Write configuration file] *************************************
2025-06-02 19:53:27.721205 | orchestrator | Monday 02 June 2025 19:53:27 +0000 (0:00:00.196) 0:00:27.048 ***********
2025-06-02 19:53:28.731771 | orchestrator | changed: [testbed-node-4 -> testbed-manager(192.168.16.5)]
2025-06-02 19:53:28.733396 | orchestrator |
2025-06-02 19:53:28.734413 | orchestrator | PLAY [Ceph configure LVM] ******************************************************
2025-06-02 19:53:28.736706 | orchestrator |
2025-06-02 19:53:28.738723 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2025-06-02 19:53:28.739880 | orchestrator | Monday 02 June 2025 19:53:28 +0000 (0:00:01.038) 0:00:28.086 ***********
2025-06-02 19:53:29.217149 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)]
2025-06-02 19:53:29.219483 | orchestrator |
2025-06-02 19:53:29.219595 | orchestrator | TASK [Get initial list of available block devices] *****************************
2025-06-02 19:53:29.219965 | orchestrator | Monday 02 June 2025 19:53:29 +0000 (0:00:00.482) 0:00:28.569 ***********
2025-06-02 19:53:29.861627 | orchestrator | ok: [testbed-node-5]
2025-06-02 19:53:29.865497 | orchestrator |
2025-06-02 19:53:29.866619 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-02 19:53:29.867753 | orchestrator | Monday 02 June 2025 19:53:29 +0000 (0:00:00.644) 0:00:29.213 ***********
2025-06-02 19:53:30.232737 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0)
2025-06-02 19:53:30.232904 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1)
2025-06-02 19:53:30.236610 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2)
2025-06-02 19:53:30.236779 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3)
2025-06-02 19:53:30.238725 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4)
2025-06-02 19:53:30.240195 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5)
2025-06-02 19:53:30.241095 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6)
2025-06-02 19:53:30.241944 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7)
2025-06-02 19:53:30.242510 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda)
2025-06-02 19:53:30.243267 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb)
2025-06-02 19:53:30.244042 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc)
2025-06-02 19:53:30.244489 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd)
2025-06-02 19:53:30.245041 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0)
2025-06-02 19:53:30.245594 | orchestrator |
2025-06-02 19:53:30.246086 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-02 19:53:30.246522 | orchestrator | Monday 02 June 2025 19:53:30 +0000 (0:00:00.370) 0:00:29.584 ***********
2025-06-02 19:53:30.444578 | orchestrator | skipping: [testbed-node-5]
2025-06-02 19:53:30.445263 | orchestrator |
2025-06-02 19:53:30.446922 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-02 19:53:30.447778 | orchestrator | Monday 02 June 2025 19:53:30 +0000 (0:00:00.209) 0:00:29.793 ***********
2025-06-02 19:53:30.641649 | orchestrator | skipping: [testbed-node-5]
2025-06-02 19:53:30.641722 | orchestrator |
2025-06-02 19:53:30.643492 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-02 19:53:30.644753 | orchestrator | Monday 02 June 2025 19:53:30 +0000 (0:00:00.199) 0:00:29.992 ***********
2025-06-02 19:53:30.837840 | orchestrator | skipping: [testbed-node-5]
2025-06-02 19:53:30.838832 | orchestrator |
2025-06-02 19:53:30.842550 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-02 19:53:30.842649 | orchestrator | Monday 02 June 2025 19:53:30 +0000 (0:00:00.197) 0:00:30.189 ***********
2025-06-02 19:53:31.034866 | orchestrator | skipping: [testbed-node-5]
2025-06-02 19:53:31.034959 | orchestrator |
2025-06-02 19:53:31.037714 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-02 19:53:31.037738 | orchestrator | Monday 02 June 2025 19:53:31 +0000 (0:00:00.195) 0:00:30.385 ***********
2025-06-02 19:53:31.225649 | orchestrator | skipping: [testbed-node-5]
2025-06-02 19:53:31.226610 | orchestrator |
2025-06-02 19:53:31.228276 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-02 19:53:31.229401 | orchestrator | Monday 02 June 2025 19:53:31 +0000 (0:00:00.193) 0:00:30.578 ***********
2025-06-02 19:53:31.449826 | orchestrator | skipping: [testbed-node-5]
2025-06-02 19:53:31.450995 | orchestrator |
2025-06-02 19:53:31.451961 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-02 19:53:31.452342 | orchestrator | Monday 02 June 2025 19:53:31 +0000 (0:00:00.222) 0:00:30.801 ***********
2025-06-02 19:53:31.643317 | orchestrator | skipping: [testbed-node-5]
2025-06-02 19:53:31.643747 | orchestrator |
2025-06-02 19:53:31.644279 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-02 19:53:31.644882 | orchestrator | Monday 02 June 2025 19:53:31 +0000 (0:00:00.195) 0:00:30.997 ***********
2025-06-02 19:53:31.857376 | orchestrator | skipping: [testbed-node-5]
2025-06-02 19:53:31.857987 | orchestrator |
2025-06-02 19:53:31.859200 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-02 19:53:31.860355 | orchestrator | Monday 02 June 2025 19:53:31 +0000 (0:00:00.213) 0:00:31.210 ***********
2025-06-02 19:53:32.502369 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_08b58ced-3c5b-405c-ae09-2d18558cfc25)
2025-06-02 19:53:32.502533 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_08b58ced-3c5b-405c-ae09-2d18558cfc25)
2025-06-02 19:53:32.502551 | orchestrator |
2025-06-02 19:53:32.503203 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-02 19:53:32.504521 | orchestrator | Monday 02 June 2025 19:53:32 +0000 (0:00:00.640) 0:00:31.850 ***********
2025-06-02 19:53:33.369615 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_fe4e5841-a5c6-4e3c-a2f3-c1ddedcfef3b)
2025-06-02 19:53:33.369784 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_fe4e5841-a5c6-4e3c-a2f3-c1ddedcfef3b)
2025-06-02 19:53:33.370155 | orchestrator |
2025-06-02 19:53:33.370550 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-02 19:53:33.370975 | orchestrator | Monday 02 June 2025 19:53:33 +0000 (0:00:00.873) 0:00:32.724 ***********
2025-06-02 19:53:33.774928 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_17194968-3402-4871-a3b7-d8b4dd3032d8)
2025-06-02 19:53:33.775152 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_17194968-3402-4871-a3b7-d8b4dd3032d8)
2025-06-02 19:53:33.776622 | orchestrator |
2025-06-02 19:53:33.777446 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-02 19:53:33.778288 | orchestrator | Monday 02 June 2025 19:53:33 +0000 (0:00:00.404) 0:00:33.129 ***********
2025-06-02 19:53:34.215217 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_3a83bf91-153f-49f3-b384-9ce8856c05fb)
2025-06-02 19:53:34.216113 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_3a83bf91-153f-49f3-b384-9ce8856c05fb)
2025-06-02 19:53:34.216952 | orchestrator |
2025-06-02 19:53:34.219530 | orchestrator | TASK [Add known links to
the list of available block devices] ****************** 2025-06-02 19:53:34.219586 | orchestrator | Monday 02 June 2025 19:53:34 +0000 (0:00:00.439) 0:00:33.568 *********** 2025-06-02 19:53:34.538363 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-06-02 19:53:34.540171 | orchestrator | 2025-06-02 19:53:34.542508 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 19:53:34.544053 | orchestrator | Monday 02 June 2025 19:53:34 +0000 (0:00:00.321) 0:00:33.890 *********** 2025-06-02 19:53:34.967999 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0) 2025-06-02 19:53:34.969115 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1) 2025-06-02 19:53:34.970048 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2) 2025-06-02 19:53:34.972581 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3) 2025-06-02 19:53:34.972599 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4) 2025-06-02 19:53:34.972907 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5) 2025-06-02 19:53:34.974561 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6) 2025-06-02 19:53:34.975497 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7) 2025-06-02 19:53:34.976320 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda) 2025-06-02 19:53:34.977074 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb) 2025-06-02 19:53:34.977877 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc) 
2025-06-02 19:53:34.978807 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd) 2025-06-02 19:53:34.979510 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0) 2025-06-02 19:53:34.980002 | orchestrator | 2025-06-02 19:53:34.980844 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 19:53:34.981827 | orchestrator | Monday 02 June 2025 19:53:34 +0000 (0:00:00.431) 0:00:34.321 *********** 2025-06-02 19:53:35.172008 | orchestrator | skipping: [testbed-node-5] 2025-06-02 19:53:35.172670 | orchestrator | 2025-06-02 19:53:35.174106 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 19:53:35.174396 | orchestrator | Monday 02 June 2025 19:53:35 +0000 (0:00:00.203) 0:00:34.525 *********** 2025-06-02 19:53:35.364645 | orchestrator | skipping: [testbed-node-5] 2025-06-02 19:53:35.365765 | orchestrator | 2025-06-02 19:53:35.366558 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 19:53:35.367534 | orchestrator | Monday 02 June 2025 19:53:35 +0000 (0:00:00.192) 0:00:34.718 *********** 2025-06-02 19:53:35.565848 | orchestrator | skipping: [testbed-node-5] 2025-06-02 19:53:35.566257 | orchestrator | 2025-06-02 19:53:35.567099 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 19:53:35.569750 | orchestrator | Monday 02 June 2025 19:53:35 +0000 (0:00:00.199) 0:00:34.917 *********** 2025-06-02 19:53:35.756202 | orchestrator | skipping: [testbed-node-5] 2025-06-02 19:53:35.756680 | orchestrator | 2025-06-02 19:53:35.758201 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 19:53:35.760088 | orchestrator | Monday 02 June 2025 19:53:35 +0000 (0:00:00.191) 0:00:35.109 *********** 2025-06-02 19:53:35.952151 
| orchestrator | skipping: [testbed-node-5] 2025-06-02 19:53:35.952414 | orchestrator | 2025-06-02 19:53:35.953803 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 19:53:35.957522 | orchestrator | Monday 02 June 2025 19:53:35 +0000 (0:00:00.196) 0:00:35.305 *********** 2025-06-02 19:53:36.606796 | orchestrator | skipping: [testbed-node-5] 2025-06-02 19:53:36.606921 | orchestrator | 2025-06-02 19:53:36.607468 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 19:53:36.607862 | orchestrator | Monday 02 June 2025 19:53:36 +0000 (0:00:00.655) 0:00:35.961 *********** 2025-06-02 19:53:36.803006 | orchestrator | skipping: [testbed-node-5] 2025-06-02 19:53:36.804506 | orchestrator | 2025-06-02 19:53:36.805582 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 19:53:36.807101 | orchestrator | Monday 02 June 2025 19:53:36 +0000 (0:00:00.195) 0:00:36.156 *********** 2025-06-02 19:53:37.002897 | orchestrator | skipping: [testbed-node-5] 2025-06-02 19:53:37.003108 | orchestrator | 2025-06-02 19:53:37.004106 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 19:53:37.005134 | orchestrator | Monday 02 June 2025 19:53:36 +0000 (0:00:00.199) 0:00:36.355 *********** 2025-06-02 19:53:37.636022 | orchestrator | ok: [testbed-node-5] => (item=sda1) 2025-06-02 19:53:37.637470 | orchestrator | ok: [testbed-node-5] => (item=sda14) 2025-06-02 19:53:37.637523 | orchestrator | ok: [testbed-node-5] => (item=sda15) 2025-06-02 19:53:37.637536 | orchestrator | ok: [testbed-node-5] => (item=sda16) 2025-06-02 19:53:37.638285 | orchestrator | 2025-06-02 19:53:37.638821 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 19:53:37.639388 | orchestrator | Monday 02 June 2025 19:53:37 +0000 (0:00:00.632) 0:00:36.987 
*********** 2025-06-02 19:53:37.840079 | orchestrator | skipping: [testbed-node-5] 2025-06-02 19:53:37.840901 | orchestrator | 2025-06-02 19:53:37.841687 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 19:53:37.842299 | orchestrator | Monday 02 June 2025 19:53:37 +0000 (0:00:00.206) 0:00:37.193 *********** 2025-06-02 19:53:38.035373 | orchestrator | skipping: [testbed-node-5] 2025-06-02 19:53:38.035978 | orchestrator | 2025-06-02 19:53:38.036706 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 19:53:38.037833 | orchestrator | Monday 02 June 2025 19:53:38 +0000 (0:00:00.194) 0:00:37.388 *********** 2025-06-02 19:53:38.236935 | orchestrator | skipping: [testbed-node-5] 2025-06-02 19:53:38.237145 | orchestrator | 2025-06-02 19:53:38.237558 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 19:53:38.241734 | orchestrator | Monday 02 June 2025 19:53:38 +0000 (0:00:00.202) 0:00:37.591 *********** 2025-06-02 19:53:38.442840 | orchestrator | skipping: [testbed-node-5] 2025-06-02 19:53:38.443202 | orchestrator | 2025-06-02 19:53:38.444286 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2025-06-02 19:53:38.445122 | orchestrator | Monday 02 June 2025 19:53:38 +0000 (0:00:00.204) 0:00:37.796 *********** 2025-06-02 19:53:38.628301 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': None}) 2025-06-02 19:53:38.629483 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': None}) 2025-06-02 19:53:38.631730 | orchestrator | 2025-06-02 19:53:38.631827 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2025-06-02 19:53:38.631946 | orchestrator | Monday 02 June 2025 19:53:38 +0000 (0:00:00.184) 0:00:37.981 *********** 2025-06-02 19:53:38.813598 | orchestrator | skipping: 
[testbed-node-5] 2025-06-02 19:53:38.816132 | orchestrator | 2025-06-02 19:53:38.817926 | orchestrator | TASK [Generate DB VG names] **************************************************** 2025-06-02 19:53:38.821020 | orchestrator | Monday 02 June 2025 19:53:38 +0000 (0:00:00.185) 0:00:38.166 *********** 2025-06-02 19:53:38.948822 | orchestrator | skipping: [testbed-node-5] 2025-06-02 19:53:38.952106 | orchestrator | 2025-06-02 19:53:38.953048 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2025-06-02 19:53:38.953770 | orchestrator | Monday 02 June 2025 19:53:38 +0000 (0:00:00.134) 0:00:38.300 *********** 2025-06-02 19:53:39.086499 | orchestrator | skipping: [testbed-node-5] 2025-06-02 19:53:39.087731 | orchestrator | 2025-06-02 19:53:39.090665 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2025-06-02 19:53:39.092864 | orchestrator | Monday 02 June 2025 19:53:39 +0000 (0:00:00.137) 0:00:38.438 *********** 2025-06-02 19:53:39.500039 | orchestrator | ok: [testbed-node-5] 2025-06-02 19:53:39.500912 | orchestrator | 2025-06-02 19:53:39.501865 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2025-06-02 19:53:39.502983 | orchestrator | Monday 02 June 2025 19:53:39 +0000 (0:00:00.414) 0:00:38.853 *********** 2025-06-02 19:53:39.686933 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '86208513-8fbd-535b-80fd-915c228be133'}}) 2025-06-02 19:53:39.688173 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'ed769c7c-5756-52eb-9583-a607cefce370'}}) 2025-06-02 19:53:39.688977 | orchestrator | 2025-06-02 19:53:39.690180 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2025-06-02 19:53:39.691225 | orchestrator | Monday 02 June 2025 19:53:39 +0000 (0:00:00.185) 0:00:39.038 *********** 2025-06-02 19:53:39.836143 | 
orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '86208513-8fbd-535b-80fd-915c228be133'}})  2025-06-02 19:53:39.841085 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'ed769c7c-5756-52eb-9583-a607cefce370'}})  2025-06-02 19:53:39.841188 | orchestrator | skipping: [testbed-node-5] 2025-06-02 19:53:39.841675 | orchestrator | 2025-06-02 19:53:39.842818 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2025-06-02 19:53:39.843638 | orchestrator | Monday 02 June 2025 19:53:39 +0000 (0:00:00.151) 0:00:39.189 *********** 2025-06-02 19:53:40.000384 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '86208513-8fbd-535b-80fd-915c228be133'}})  2025-06-02 19:53:40.001574 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'ed769c7c-5756-52eb-9583-a607cefce370'}})  2025-06-02 19:53:40.002179 | orchestrator | skipping: [testbed-node-5] 2025-06-02 19:53:40.003241 | orchestrator | 2025-06-02 19:53:40.004127 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2025-06-02 19:53:40.005216 | orchestrator | Monday 02 June 2025 19:53:39 +0000 (0:00:00.164) 0:00:39.354 *********** 2025-06-02 19:53:40.150144 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '86208513-8fbd-535b-80fd-915c228be133'}})  2025-06-02 19:53:40.151131 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'ed769c7c-5756-52eb-9583-a607cefce370'}})  2025-06-02 19:53:40.151532 | orchestrator | skipping: [testbed-node-5] 2025-06-02 19:53:40.153519 | orchestrator | 2025-06-02 19:53:40.153545 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2025-06-02 19:53:40.153570 | orchestrator | Monday 02 June 2025 19:53:40 +0000 
(0:00:00.148) 0:00:39.502 *********** 2025-06-02 19:53:40.281856 | orchestrator | ok: [testbed-node-5] 2025-06-02 19:53:40.281936 | orchestrator | 2025-06-02 19:53:40.282855 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2025-06-02 19:53:40.283892 | orchestrator | Monday 02 June 2025 19:53:40 +0000 (0:00:00.132) 0:00:39.635 *********** 2025-06-02 19:53:40.432685 | orchestrator | ok: [testbed-node-5] 2025-06-02 19:53:40.433328 | orchestrator | 2025-06-02 19:53:40.435084 | orchestrator | TASK [Set DB devices config data] ********************************************** 2025-06-02 19:53:40.436176 | orchestrator | Monday 02 June 2025 19:53:40 +0000 (0:00:00.151) 0:00:39.786 *********** 2025-06-02 19:53:40.577400 | orchestrator | skipping: [testbed-node-5] 2025-06-02 19:53:40.577546 | orchestrator | 2025-06-02 19:53:40.577651 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2025-06-02 19:53:40.578369 | orchestrator | Monday 02 June 2025 19:53:40 +0000 (0:00:00.139) 0:00:39.926 *********** 2025-06-02 19:53:40.715811 | orchestrator | skipping: [testbed-node-5] 2025-06-02 19:53:40.716024 | orchestrator | 2025-06-02 19:53:40.717624 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2025-06-02 19:53:40.720401 | orchestrator | Monday 02 June 2025 19:53:40 +0000 (0:00:00.142) 0:00:40.068 *********** 2025-06-02 19:53:40.851266 | orchestrator | skipping: [testbed-node-5] 2025-06-02 19:53:40.851992 | orchestrator | 2025-06-02 19:53:40.852618 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2025-06-02 19:53:40.853806 | orchestrator | Monday 02 June 2025 19:53:40 +0000 (0:00:00.136) 0:00:40.205 *********** 2025-06-02 19:53:40.998118 | orchestrator | ok: [testbed-node-5] => { 2025-06-02 19:53:41.000167 | orchestrator |  "ceph_osd_devices": { 2025-06-02 19:53:41.000791 | orchestrator |  "sdb": 
{ 2025-06-02 19:53:41.002401 | orchestrator |  "osd_lvm_uuid": "86208513-8fbd-535b-80fd-915c228be133" 2025-06-02 19:53:41.003655 | orchestrator |  }, 2025-06-02 19:53:41.004019 | orchestrator |  "sdc": { 2025-06-02 19:53:41.005393 | orchestrator |  "osd_lvm_uuid": "ed769c7c-5756-52eb-9583-a607cefce370" 2025-06-02 19:53:41.005950 | orchestrator |  } 2025-06-02 19:53:41.006405 | orchestrator |  } 2025-06-02 19:53:41.007474 | orchestrator | } 2025-06-02 19:53:41.007582 | orchestrator | 2025-06-02 19:53:41.008657 | orchestrator | TASK [Print WAL devices] ******************************************************* 2025-06-02 19:53:41.008879 | orchestrator | Monday 02 June 2025 19:53:40 +0000 (0:00:00.145) 0:00:40.351 *********** 2025-06-02 19:53:41.127582 | orchestrator | skipping: [testbed-node-5] 2025-06-02 19:53:41.129243 | orchestrator | 2025-06-02 19:53:41.130637 | orchestrator | TASK [Print DB devices] ******************************************************** 2025-06-02 19:53:41.131819 | orchestrator | Monday 02 June 2025 19:53:41 +0000 (0:00:00.128) 0:00:40.480 *********** 2025-06-02 19:53:41.445717 | orchestrator | skipping: [testbed-node-5] 2025-06-02 19:53:41.445887 | orchestrator | 2025-06-02 19:53:41.446588 | orchestrator | TASK [Print shared DB/WAL devices] ********************************************* 2025-06-02 19:53:41.451104 | orchestrator | Monday 02 June 2025 19:53:41 +0000 (0:00:00.318) 0:00:40.798 *********** 2025-06-02 19:53:41.564856 | orchestrator | skipping: [testbed-node-5] 2025-06-02 19:53:41.565209 | orchestrator | 2025-06-02 19:53:41.567208 | orchestrator | TASK [Print configuration data] ************************************************ 2025-06-02 19:53:41.569388 | orchestrator | Monday 02 June 2025 19:53:41 +0000 (0:00:00.119) 0:00:40.917 *********** 2025-06-02 19:53:41.771363 | orchestrator | changed: [testbed-node-5] => { 2025-06-02 19:53:41.773208 | orchestrator |  "_ceph_configure_lvm_config_data": { 2025-06-02 19:53:41.774135 | orchestrator 
|  "ceph_osd_devices": { 2025-06-02 19:53:41.775172 | orchestrator |  "sdb": { 2025-06-02 19:53:41.779047 | orchestrator |  "osd_lvm_uuid": "86208513-8fbd-535b-80fd-915c228be133" 2025-06-02 19:53:41.779741 | orchestrator |  }, 2025-06-02 19:53:41.780890 | orchestrator |  "sdc": { 2025-06-02 19:53:41.781855 | orchestrator |  "osd_lvm_uuid": "ed769c7c-5756-52eb-9583-a607cefce370" 2025-06-02 19:53:41.782645 | orchestrator |  } 2025-06-02 19:53:41.783762 | orchestrator |  }, 2025-06-02 19:53:41.784347 | orchestrator |  "lvm_volumes": [ 2025-06-02 19:53:41.785256 | orchestrator |  { 2025-06-02 19:53:41.786494 | orchestrator |  "data": "osd-block-86208513-8fbd-535b-80fd-915c228be133", 2025-06-02 19:53:41.787689 | orchestrator |  "data_vg": "ceph-86208513-8fbd-535b-80fd-915c228be133" 2025-06-02 19:53:41.787984 | orchestrator |  }, 2025-06-02 19:53:41.789545 | orchestrator |  { 2025-06-02 19:53:41.790852 | orchestrator |  "data": "osd-block-ed769c7c-5756-52eb-9583-a607cefce370", 2025-06-02 19:53:41.791036 | orchestrator |  "data_vg": "ceph-ed769c7c-5756-52eb-9583-a607cefce370" 2025-06-02 19:53:41.792358 | orchestrator |  } 2025-06-02 19:53:41.793182 | orchestrator |  ] 2025-06-02 19:53:41.794394 | orchestrator |  } 2025-06-02 19:53:41.794765 | orchestrator | } 2025-06-02 19:53:41.795557 | orchestrator | 2025-06-02 19:53:41.796900 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2025-06-02 19:53:41.797140 | orchestrator | Monday 02 June 2025 19:53:41 +0000 (0:00:00.206) 0:00:41.124 *********** 2025-06-02 19:53:42.733994 | orchestrator | changed: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2025-06-02 19:53:42.734977 | orchestrator | 2025-06-02 19:53:42.736173 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-02 19:53:42.736253 | orchestrator | 2025-06-02 19:53:42 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 
2025-06-02 19:53:42.736557 | orchestrator | 2025-06-02 19:53:42 | INFO  | Please wait and do not abort execution.
2025-06-02 19:53:42.737878 | orchestrator | testbed-node-3 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2025-06-02 19:53:42.739228 | orchestrator | testbed-node-4 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2025-06-02 19:53:42.740520 | orchestrator | testbed-node-5 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2025-06-02 19:53:42.740901 | orchestrator |
2025-06-02 19:53:42.741741 | orchestrator |
2025-06-02 19:53:42.742091 | orchestrator |
2025-06-02 19:53:42.742942 | orchestrator | TASKS RECAP ********************************************************************
2025-06-02 19:53:42.743694 | orchestrator | Monday 02 June 2025 19:53:42 +0000 (0:00:00.960) 0:00:42.085 ***********
2025-06-02 19:53:42.744221 | orchestrator | ===============================================================================
2025-06-02 19:53:42.744627 | orchestrator | Write configuration file ------------------------------------------------ 4.42s
2025-06-02 19:53:42.745396 | orchestrator | Add known partitions to the list of available block devices ------------- 1.24s
2025-06-02 19:53:42.746881 | orchestrator | Get initial list of available block devices ----------------------------- 1.13s
2025-06-02 19:53:42.747916 | orchestrator | Add known links to the list of available block devices ------------------ 1.11s
2025-06-02 19:53:42.748696 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 1.02s
2025-06-02 19:53:42.749485 | orchestrator | Add known partitions to the list of available block devices ------------- 1.02s
2025-06-02 19:53:42.750471 | orchestrator | Add known links to the list of available block devices ------------------ 0.87s
2025-06-02 19:53:42.751192 | orchestrator | Set UUIDs for OSD VGs/LVs ----------------------------------------------- 0.74s
2025-06-02 19:53:42.752294 | orchestrator | Define lvm_volumes structures ------------------------------------------- 0.69s
2025-06-02 19:53:42.753021 | orchestrator | Generate lvm_volumes structure (block + wal) ---------------------------- 0.66s
2025-06-02 19:53:42.753766 | orchestrator | Add known partitions to the list of available block devices ------------- 0.66s
2025-06-02 19:53:42.754720 | orchestrator | Print configuration data ------------------------------------------------ 0.65s
2025-06-02 19:53:42.755542 | orchestrator | Add known links to the list of available block devices ------------------ 0.64s
2025-06-02 19:53:42.756592 | orchestrator | Add known links to the list of available block devices ------------------ 0.63s
2025-06-02 19:53:42.756720 | orchestrator | Add known partitions to the list of available block devices ------------- 0.63s
2025-06-02 19:53:42.757590 | orchestrator | Add known partitions to the list of available block devices ------------- 0.62s
2025-06-02 19:53:42.758218 | orchestrator | Add known links to the list of available block devices ------------------ 0.61s
2025-06-02 19:53:42.758931 | orchestrator | Add known partitions to the list of available block devices ------------- 0.61s
2025-06-02 19:53:42.759543 | orchestrator | Set WAL devices config data --------------------------------------------- 0.61s
2025-06-02 19:53:42.760067 | orchestrator | Print DB devices -------------------------------------------------------- 0.60s
2025-06-02 19:53:55.653804 | orchestrator | Registering Redlock._acquired_script
2025-06-02 19:53:55.653924 | orchestrator | Registering Redlock._extend_script
2025-06-02 19:53:55.653941 | orchestrator | Registering Redlock._release_script
2025-06-02 19:53:55.705669 | orchestrator | 2025-06-02 19:53:55 | INFO  | Task 5cd8700c-c41d-4546-8fff-3a646e423ea4 (sync inventory) is running in background. Output coming soon.
2025-06-02 19:54:13.857081 | orchestrator | 2025-06-02 19:53:56 | INFO  | Starting group_vars file reorganization
2025-06-02 19:54:13.857167 | orchestrator | 2025-06-02 19:53:56 | INFO  | Moved 0 file(s) to their respective directories
2025-06-02 19:54:13.857174 | orchestrator | 2025-06-02 19:53:56 | INFO  | Group_vars file reorganization completed
2025-06-02 19:54:13.857178 | orchestrator | 2025-06-02 19:53:58 | INFO  | Starting variable preparation from inventory
2025-06-02 19:54:13.857183 | orchestrator | 2025-06-02 19:54:00 | INFO  | Writing 050-kolla-ceph-rgw-hosts.yml with ceph_rgw_hosts
2025-06-02 19:54:13.857187 | orchestrator | 2025-06-02 19:54:00 | INFO  | Writing 050-infrastructure-cephclient-mons.yml with cephclient_mons
2025-06-02 19:54:13.857208 | orchestrator | 2025-06-02 19:54:00 | INFO  | Writing 050-ceph-cluster-fsid.yml with ceph_cluster_fsid
2025-06-02 19:54:13.857213 | orchestrator | 2025-06-02 19:54:00 | INFO  | 3 file(s) written, 6 host(s) processed
2025-06-02 19:54:13.857217 | orchestrator | 2025-06-02 19:54:00 | INFO  | Variable preparation completed:
2025-06-02 19:54:13.857221 | orchestrator | 2025-06-02 19:54:01 | INFO  | Starting inventory overwrite handling
2025-06-02 19:54:13.857225 | orchestrator | 2025-06-02 19:54:01 | INFO  | Handling group overwrites in 99-overwrite
2025-06-02 19:54:13.857229 | orchestrator | 2025-06-02 19:54:01 | INFO  | Removing group frr:children from 60-generic
2025-06-02 19:54:13.857232 | orchestrator | 2025-06-02 19:54:01 | INFO  | Removing group storage:children from 50-kolla
2025-06-02 19:54:13.857236 | orchestrator | 2025-06-02 19:54:01 | INFO  | Removing group netbird:children from 50-infrastruture
2025-06-02 19:54:13.857246 | orchestrator | 2025-06-02 19:54:01 | INFO  | Removing group ceph-rgw from 50-ceph
2025-06-02 19:54:13.857250 | orchestrator | 2025-06-02 19:54:01 | INFO  | Removing group ceph-mds from 50-ceph
2025-06-02 19:54:13.857254 | orchestrator | 2025-06-02 19:54:01 | INFO  | Handling group overwrites in 20-roles
2025-06-02 19:54:13.857258 | orchestrator | 2025-06-02 19:54:01 | INFO  | Removing group k3s_node from 50-infrastruture
2025-06-02 19:54:13.857261 | orchestrator | 2025-06-02 19:54:01 | INFO  | Removed 6 group(s) in total
2025-06-02 19:54:13.857265 | orchestrator | 2025-06-02 19:54:01 | INFO  | Inventory overwrite handling completed
2025-06-02 19:54:13.857269 | orchestrator | 2025-06-02 19:54:02 | INFO  | Starting merge of inventory files
2025-06-02 19:54:13.857272 | orchestrator | 2025-06-02 19:54:02 | INFO  | Inventory files merged successfully
2025-06-02 19:54:13.857276 | orchestrator | 2025-06-02 19:54:06 | INFO  | Generating ClusterShell configuration from Ansible inventory
2025-06-02 19:54:13.857280 | orchestrator | 2025-06-02 19:54:12 | INFO  | Successfully wrote ClusterShell configuration
2025-06-02 19:54:13.857284 | orchestrator | [master f25ff76] 2025-06-02-19-54
2025-06-02 19:54:13.857289 | orchestrator | 1 file changed, 30 insertions(+), 9 deletions(-)
2025-06-02 19:54:15.882411 | orchestrator | 2025-06-02 19:54:15 | INFO  | Task d595b9a4-ff51-4c94-9006-484ff55c56ca (ceph-create-lvm-devices) was prepared for execution.
2025-06-02 19:54:15.882547 | orchestrator | 2025-06-02 19:54:15 | INFO  | It takes a moment until task d595b9a4-ff51-4c94-9006-484ff55c56ca (ceph-create-lvm-devices) has been started and output is visible here.
2025-06-02 19:54:19.978634 | orchestrator |
2025-06-02 19:54:19.980231 | orchestrator | PLAY [Ceph create LVM devices] *************************************************
2025-06-02 19:54:19.980286 | orchestrator |
2025-06-02 19:54:19.981980 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2025-06-02 19:54:19.982769 | orchestrator | Monday 02 June 2025 19:54:19 +0000 (0:00:00.278) 0:00:00.278 ***********
2025-06-02 19:54:20.208507 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2025-06-02 19:54:20.208808 | orchestrator |
2025-06-02 19:54:20.209629 | orchestrator | TASK [Get initial list of available block devices] *****************************
2025-06-02 19:54:20.210460 | orchestrator | Monday 02 June 2025 19:54:20 +0000 (0:00:00.231) 0:00:00.509 ***********
2025-06-02 19:54:20.415534 | orchestrator | ok: [testbed-node-3]
2025-06-02 19:54:20.415633 | orchestrator |
2025-06-02 19:54:20.415647 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-02 19:54:20.418874 | orchestrator | Monday 02 June 2025 19:54:20 +0000 (0:00:00.206) 0:00:00.716 ***********
2025-06-02 19:54:20.780207 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0)
2025-06-02 19:54:20.780516 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1)
2025-06-02 19:54:20.781717 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2)
2025-06-02 19:54:20.782236 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3)
2025-06-02 19:54:20.784214 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4)
2025-06-02 19:54:20.785143 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5)
2025-06-02 19:54:20.786347 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6)
2025-06-02 19:54:20.787453 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7)
2025-06-02 19:54:20.788492 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda)
2025-06-02 19:54:20.789378 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb)
2025-06-02 19:54:20.790493 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc)
2025-06-02 19:54:20.791501 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd)
2025-06-02 19:54:20.792101 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0)
2025-06-02 19:54:20.792860 | orchestrator |
2025-06-02 19:54:20.793859 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-02 19:54:20.794072 | orchestrator | Monday 02 June 2025 19:54:20 +0000 (0:00:00.365) 0:00:01.082 ***********
2025-06-02 19:54:21.154372 | orchestrator | skipping: [testbed-node-3]
2025-06-02 19:54:21.155583 | orchestrator |
2025-06-02 19:54:21.157266 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-02 19:54:21.158880 | orchestrator | Monday 02 June 2025 19:54:21 +0000 (0:00:00.371) 0:00:01.453 ***********
2025-06-02 19:54:21.331031 | orchestrator | skipping: [testbed-node-3]
2025-06-02 19:54:21.331202 | orchestrator |
2025-06-02 19:54:21.332387 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-02 19:54:21.333350 | orchestrator | Monday 02 June 2025 19:54:21 +0000 (0:00:00.177) 0:00:01.631 ***********
2025-06-02 19:54:21.512518 | orchestrator | skipping: [testbed-node-3]
2025-06-02 19:54:21.512692 | orchestrator |
2025-06-02 19:54:21.513482 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-02 19:54:21.513988 | orchestrator | Monday 02 June 2025 19:54:21 +0000 (0:00:00.181) 0:00:01.812 ***********
2025-06-02 19:54:21.675471 | orchestrator | skipping: [testbed-node-3]
2025-06-02 19:54:21.675677 | orchestrator |
2025-06-02 19:54:21.676739 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-02 19:54:21.677062 | orchestrator | Monday 02 June 2025 19:54:21 +0000 (0:00:00.164) 0:00:01.977 ***********
2025-06-02 19:54:21.847088 | orchestrator | skipping: [testbed-node-3]
2025-06-02 19:54:21.847217 | orchestrator |
2025-06-02 19:54:21.847416 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-02 19:54:21.848270 | orchestrator | Monday 02 June 2025 19:54:21 +0000 (0:00:00.170) 0:00:02.148 ***********
2025-06-02 19:54:22.016294 | orchestrator | skipping: [testbed-node-3]
2025-06-02 19:54:22.016906 | orchestrator |
2025-06-02 19:54:22.019989 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-02 19:54:22.020287 | orchestrator | Monday 02 June 2025 19:54:22 +0000 (0:00:00.168) 0:00:02.316 ***********
2025-06-02 19:54:22.198160 | orchestrator | skipping: [testbed-node-3]
2025-06-02 19:54:22.198263 | orchestrator |
2025-06-02 19:54:22.198278 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-02 19:54:22.198291 | orchestrator | Monday 02 June 2025 19:54:22 +0000 (0:00:00.182) 0:00:02.499 ***********
2025-06-02 19:54:22.363398 | orchestrator | skipping: [testbed-node-3]
2025-06-02 19:54:22.363833 | orchestrator |
2025-06-02 19:54:22.364383 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-02 19:54:22.365376 | orchestrator | Monday 02 June 2025 19:54:22 +0000 (0:00:00.166) 0:00:02.665 ***********
2025-06-02 19:54:22.728765 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_3b3451ec-fae9-4227-a22d-4a5dda6aaaab)
2025-06-02 19:54:22.728926 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_3b3451ec-fae9-4227-a22d-4a5dda6aaaab)
2025-06-02 19:54:22.729060 | orchestrator |
2025-06-02 19:54:22.729483 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-02 19:54:22.731190 | orchestrator | Monday 02 June 2025 19:54:22 +0000 (0:00:00.365) 0:00:03.030 ***********
2025-06-02 19:54:23.091669 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_f90c13d8-18de-4224-a0ec-2fb9bc967aba)
2025-06-02 19:54:23.092772 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_f90c13d8-18de-4224-a0ec-2fb9bc967aba)
2025-06-02 19:54:23.093377 | orchestrator |
2025-06-02 19:54:23.094205 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-02 19:54:23.094819 | orchestrator | Monday 02 June 2025 19:54:23 +0000 (0:00:00.361) 0:00:03.392 ***********
2025-06-02 19:54:23.623775 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_31522631-626d-4eab-bbf4-d80ec429ee40)
2025-06-02 19:54:23.624198 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_31522631-626d-4eab-bbf4-d80ec429ee40)
2025-06-02 19:54:23.624996 | orchestrator |
2025-06-02 19:54:23.625801 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-02 19:54:23.626629 | orchestrator | Monday 02 June 2025 19:54:23 +0000 (0:00:00.533) 0:00:03.925 ***********
2025-06-02 19:54:24.155575 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_2edf9efd-121b-4ff6-b6f5-d420782ba04f)
2025-06-02 19:54:24.155998 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_2edf9efd-121b-4ff6-b6f5-d420782ba04f)
2025-06-02 19:54:24.157266 | orchestrator |
2025-06-02 19:54:24.157290 |
orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-02 19:54:24.157948 | orchestrator | Monday 02 June 2025 19:54:24 +0000 (0:00:00.530) 0:00:04.456 *********** 2025-06-02 19:54:24.706835 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-06-02 19:54:24.707654 | orchestrator | 2025-06-02 19:54:24.708517 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 19:54:24.709386 | orchestrator | Monday 02 June 2025 19:54:24 +0000 (0:00:00.553) 0:00:05.009 *********** 2025-06-02 19:54:25.105852 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0) 2025-06-02 19:54:25.107009 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1) 2025-06-02 19:54:25.108406 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2) 2025-06-02 19:54:25.108898 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3) 2025-06-02 19:54:25.109877 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4) 2025-06-02 19:54:25.110528 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5) 2025-06-02 19:54:25.110715 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6) 2025-06-02 19:54:25.111475 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7) 2025-06-02 19:54:25.111881 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda) 2025-06-02 19:54:25.112258 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb) 2025-06-02 19:54:25.112281 | orchestrator | included: 
/ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc) 2025-06-02 19:54:25.112757 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd) 2025-06-02 19:54:25.113156 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0) 2025-06-02 19:54:25.113378 | orchestrator | 2025-06-02 19:54:25.113857 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 19:54:25.114083 | orchestrator | Monday 02 June 2025 19:54:25 +0000 (0:00:00.395) 0:00:05.405 *********** 2025-06-02 19:54:25.301939 | orchestrator | skipping: [testbed-node-3] 2025-06-02 19:54:25.302896 | orchestrator | 2025-06-02 19:54:25.303725 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 19:54:25.305519 | orchestrator | Monday 02 June 2025 19:54:25 +0000 (0:00:00.197) 0:00:05.602 *********** 2025-06-02 19:54:25.499218 | orchestrator | skipping: [testbed-node-3] 2025-06-02 19:54:25.500046 | orchestrator | 2025-06-02 19:54:25.501269 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 19:54:25.501835 | orchestrator | Monday 02 June 2025 19:54:25 +0000 (0:00:00.198) 0:00:05.800 *********** 2025-06-02 19:54:25.695649 | orchestrator | skipping: [testbed-node-3] 2025-06-02 19:54:25.696345 | orchestrator | 2025-06-02 19:54:25.697280 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 19:54:25.697294 | orchestrator | Monday 02 June 2025 19:54:25 +0000 (0:00:00.196) 0:00:05.997 *********** 2025-06-02 19:54:25.903161 | orchestrator | skipping: [testbed-node-3] 2025-06-02 19:54:25.903270 | orchestrator | 2025-06-02 19:54:25.903287 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 19:54:25.903416 | orchestrator | Monday 02 June 2025 
19:54:25 +0000 (0:00:00.206) 0:00:06.204 *********** 2025-06-02 19:54:26.110911 | orchestrator | skipping: [testbed-node-3] 2025-06-02 19:54:26.111029 | orchestrator | 2025-06-02 19:54:26.111051 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 19:54:26.111071 | orchestrator | Monday 02 June 2025 19:54:26 +0000 (0:00:00.206) 0:00:06.410 *********** 2025-06-02 19:54:26.306792 | orchestrator | skipping: [testbed-node-3] 2025-06-02 19:54:26.307121 | orchestrator | 2025-06-02 19:54:26.307714 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 19:54:26.308342 | orchestrator | Monday 02 June 2025 19:54:26 +0000 (0:00:00.197) 0:00:06.608 *********** 2025-06-02 19:54:26.506014 | orchestrator | skipping: [testbed-node-3] 2025-06-02 19:54:26.507688 | orchestrator | 2025-06-02 19:54:26.508747 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 19:54:26.509883 | orchestrator | Monday 02 June 2025 19:54:26 +0000 (0:00:00.198) 0:00:06.806 *********** 2025-06-02 19:54:26.736780 | orchestrator | skipping: [testbed-node-3] 2025-06-02 19:54:26.737189 | orchestrator | 2025-06-02 19:54:26.737826 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 19:54:26.738579 | orchestrator | Monday 02 June 2025 19:54:26 +0000 (0:00:00.231) 0:00:07.037 *********** 2025-06-02 19:54:27.778573 | orchestrator | ok: [testbed-node-3] => (item=sda1) 2025-06-02 19:54:27.778692 | orchestrator | ok: [testbed-node-3] => (item=sda14) 2025-06-02 19:54:27.779514 | orchestrator | ok: [testbed-node-3] => (item=sda15) 2025-06-02 19:54:27.780621 | orchestrator | ok: [testbed-node-3] => (item=sda16) 2025-06-02 19:54:27.780952 | orchestrator | 2025-06-02 19:54:27.781493 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 19:54:27.782296 
| orchestrator | Monday 02 June 2025 19:54:27 +0000 (0:00:01.040) 0:00:08.078 *********** 2025-06-02 19:54:27.977086 | orchestrator | skipping: [testbed-node-3] 2025-06-02 19:54:27.977569 | orchestrator | 2025-06-02 19:54:27.979021 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 19:54:27.980047 | orchestrator | Monday 02 June 2025 19:54:27 +0000 (0:00:00.200) 0:00:08.278 *********** 2025-06-02 19:54:28.170718 | orchestrator | skipping: [testbed-node-3] 2025-06-02 19:54:28.170824 | orchestrator | 2025-06-02 19:54:28.170895 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 19:54:28.171193 | orchestrator | Monday 02 June 2025 19:54:28 +0000 (0:00:00.192) 0:00:08.471 *********** 2025-06-02 19:54:28.357374 | orchestrator | skipping: [testbed-node-3] 2025-06-02 19:54:28.357542 | orchestrator | 2025-06-02 19:54:28.357671 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 19:54:28.357972 | orchestrator | Monday 02 June 2025 19:54:28 +0000 (0:00:00.187) 0:00:08.659 *********** 2025-06-02 19:54:28.562579 | orchestrator | skipping: [testbed-node-3] 2025-06-02 19:54:28.562770 | orchestrator | 2025-06-02 19:54:28.563541 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2025-06-02 19:54:28.564256 | orchestrator | Monday 02 June 2025 19:54:28 +0000 (0:00:00.204) 0:00:08.863 *********** 2025-06-02 19:54:28.699695 | orchestrator | skipping: [testbed-node-3] 2025-06-02 19:54:28.701504 | orchestrator | 2025-06-02 19:54:28.701530 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2025-06-02 19:54:28.701537 | orchestrator | Monday 02 June 2025 19:54:28 +0000 (0:00:00.136) 0:00:09.000 *********** 2025-06-02 19:54:28.893012 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 
'93e9f309-356a-50f8-bf6b-26db11b00033'}}) 2025-06-02 19:54:28.893129 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '01a13ba8-1f69-5051-bec5-e01e7e9b87e5'}}) 2025-06-02 19:54:28.893233 | orchestrator | 2025-06-02 19:54:28.893690 | orchestrator | TASK [Create block VGs] ******************************************************** 2025-06-02 19:54:28.894275 | orchestrator | Monday 02 June 2025 19:54:28 +0000 (0:00:00.192) 0:00:09.193 *********** 2025-06-02 19:54:30.885942 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-93e9f309-356a-50f8-bf6b-26db11b00033', 'data_vg': 'ceph-93e9f309-356a-50f8-bf6b-26db11b00033'}) 2025-06-02 19:54:30.886205 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-01a13ba8-1f69-5051-bec5-e01e7e9b87e5', 'data_vg': 'ceph-01a13ba8-1f69-5051-bec5-e01e7e9b87e5'}) 2025-06-02 19:54:30.888781 | orchestrator | 2025-06-02 19:54:30.890183 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2025-06-02 19:54:30.891129 | orchestrator | Monday 02 June 2025 19:54:30 +0000 (0:00:01.989) 0:00:11.182 *********** 2025-06-02 19:54:31.042760 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-93e9f309-356a-50f8-bf6b-26db11b00033', 'data_vg': 'ceph-93e9f309-356a-50f8-bf6b-26db11b00033'})  2025-06-02 19:54:31.043675 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-01a13ba8-1f69-5051-bec5-e01e7e9b87e5', 'data_vg': 'ceph-01a13ba8-1f69-5051-bec5-e01e7e9b87e5'})  2025-06-02 19:54:31.044214 | orchestrator | skipping: [testbed-node-3] 2025-06-02 19:54:31.046210 | orchestrator | 2025-06-02 19:54:31.046237 | orchestrator | TASK [Create block LVs] ******************************************************** 2025-06-02 19:54:31.047178 | orchestrator | Monday 02 June 2025 19:54:31 +0000 (0:00:00.160) 0:00:11.343 *********** 2025-06-02 19:54:32.521033 | orchestrator | changed: [testbed-node-3] => (item={'data': 
'osd-block-93e9f309-356a-50f8-bf6b-26db11b00033', 'data_vg': 'ceph-93e9f309-356a-50f8-bf6b-26db11b00033'}) 2025-06-02 19:54:32.523890 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-01a13ba8-1f69-5051-bec5-e01e7e9b87e5', 'data_vg': 'ceph-01a13ba8-1f69-5051-bec5-e01e7e9b87e5'}) 2025-06-02 19:54:32.524247 | orchestrator | 2025-06-02 19:54:32.524841 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2025-06-02 19:54:32.525356 | orchestrator | Monday 02 June 2025 19:54:32 +0000 (0:00:01.476) 0:00:12.820 *********** 2025-06-02 19:54:32.675307 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-93e9f309-356a-50f8-bf6b-26db11b00033', 'data_vg': 'ceph-93e9f309-356a-50f8-bf6b-26db11b00033'})  2025-06-02 19:54:32.676327 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-01a13ba8-1f69-5051-bec5-e01e7e9b87e5', 'data_vg': 'ceph-01a13ba8-1f69-5051-bec5-e01e7e9b87e5'})  2025-06-02 19:54:32.677241 | orchestrator | skipping: [testbed-node-3] 2025-06-02 19:54:32.677864 | orchestrator | 2025-06-02 19:54:32.678654 | orchestrator | TASK [Create DB VGs] *********************************************************** 2025-06-02 19:54:32.679508 | orchestrator | Monday 02 June 2025 19:54:32 +0000 (0:00:00.155) 0:00:12.976 *********** 2025-06-02 19:54:32.818493 | orchestrator | skipping: [testbed-node-3] 2025-06-02 19:54:32.818868 | orchestrator | 2025-06-02 19:54:32.819503 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2025-06-02 19:54:32.820081 | orchestrator | Monday 02 June 2025 19:54:32 +0000 (0:00:00.144) 0:00:13.120 *********** 2025-06-02 19:54:33.164225 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-93e9f309-356a-50f8-bf6b-26db11b00033', 'data_vg': 'ceph-93e9f309-356a-50f8-bf6b-26db11b00033'})  2025-06-02 19:54:33.164583 | orchestrator | skipping: [testbed-node-3] => (item={'data': 
'osd-block-01a13ba8-1f69-5051-bec5-e01e7e9b87e5', 'data_vg': 'ceph-01a13ba8-1f69-5051-bec5-e01e7e9b87e5'})  2025-06-02 19:54:33.166489 | orchestrator | skipping: [testbed-node-3] 2025-06-02 19:54:33.167572 | orchestrator | 2025-06-02 19:54:33.168166 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2025-06-02 19:54:33.169165 | orchestrator | Monday 02 June 2025 19:54:33 +0000 (0:00:00.342) 0:00:13.463 *********** 2025-06-02 19:54:33.316521 | orchestrator | skipping: [testbed-node-3] 2025-06-02 19:54:33.317514 | orchestrator | 2025-06-02 19:54:33.318168 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2025-06-02 19:54:33.319025 | orchestrator | Monday 02 June 2025 19:54:33 +0000 (0:00:00.154) 0:00:13.617 *********** 2025-06-02 19:54:33.469958 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-93e9f309-356a-50f8-bf6b-26db11b00033', 'data_vg': 'ceph-93e9f309-356a-50f8-bf6b-26db11b00033'})  2025-06-02 19:54:33.472469 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-01a13ba8-1f69-5051-bec5-e01e7e9b87e5', 'data_vg': 'ceph-01a13ba8-1f69-5051-bec5-e01e7e9b87e5'})  2025-06-02 19:54:33.472509 | orchestrator | skipping: [testbed-node-3] 2025-06-02 19:54:33.472516 | orchestrator | 2025-06-02 19:54:33.472522 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2025-06-02 19:54:33.472703 | orchestrator | Monday 02 June 2025 19:54:33 +0000 (0:00:00.153) 0:00:13.771 *********** 2025-06-02 19:54:33.613167 | orchestrator | skipping: [testbed-node-3] 2025-06-02 19:54:33.615188 | orchestrator | 2025-06-02 19:54:33.616569 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2025-06-02 19:54:33.616983 | orchestrator | Monday 02 June 2025 19:54:33 +0000 (0:00:00.142) 0:00:13.914 *********** 2025-06-02 19:54:33.769851 | orchestrator | skipping: 
[testbed-node-3] => (item={'data': 'osd-block-93e9f309-356a-50f8-bf6b-26db11b00033', 'data_vg': 'ceph-93e9f309-356a-50f8-bf6b-26db11b00033'})  2025-06-02 19:54:33.770225 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-01a13ba8-1f69-5051-bec5-e01e7e9b87e5', 'data_vg': 'ceph-01a13ba8-1f69-5051-bec5-e01e7e9b87e5'})  2025-06-02 19:54:33.771540 | orchestrator | skipping: [testbed-node-3] 2025-06-02 19:54:33.772460 | orchestrator | 2025-06-02 19:54:33.772965 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2025-06-02 19:54:33.773719 | orchestrator | Monday 02 June 2025 19:54:33 +0000 (0:00:00.156) 0:00:14.070 *********** 2025-06-02 19:54:33.903413 | orchestrator | ok: [testbed-node-3] 2025-06-02 19:54:33.904526 | orchestrator | 2025-06-02 19:54:33.905168 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2025-06-02 19:54:33.906128 | orchestrator | Monday 02 June 2025 19:54:33 +0000 (0:00:00.134) 0:00:14.204 *********** 2025-06-02 19:54:34.062980 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-93e9f309-356a-50f8-bf6b-26db11b00033', 'data_vg': 'ceph-93e9f309-356a-50f8-bf6b-26db11b00033'})  2025-06-02 19:54:34.065783 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-01a13ba8-1f69-5051-bec5-e01e7e9b87e5', 'data_vg': 'ceph-01a13ba8-1f69-5051-bec5-e01e7e9b87e5'})  2025-06-02 19:54:34.065850 | orchestrator | skipping: [testbed-node-3] 2025-06-02 19:54:34.066123 | orchestrator | 2025-06-02 19:54:34.066709 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] *************** 2025-06-02 19:54:34.067023 | orchestrator | Monday 02 June 2025 19:54:34 +0000 (0:00:00.157) 0:00:14.362 *********** 2025-06-02 19:54:34.210491 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-93e9f309-356a-50f8-bf6b-26db11b00033', 'data_vg': 'ceph-93e9f309-356a-50f8-bf6b-26db11b00033'})  
2025-06-02 19:54:34.210865 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-01a13ba8-1f69-5051-bec5-e01e7e9b87e5', 'data_vg': 'ceph-01a13ba8-1f69-5051-bec5-e01e7e9b87e5'})  2025-06-02 19:54:34.212260 | orchestrator | skipping: [testbed-node-3] 2025-06-02 19:54:34.213385 | orchestrator | 2025-06-02 19:54:34.214842 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2025-06-02 19:54:34.215619 | orchestrator | Monday 02 June 2025 19:54:34 +0000 (0:00:00.148) 0:00:14.510 *********** 2025-06-02 19:54:34.358744 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-93e9f309-356a-50f8-bf6b-26db11b00033', 'data_vg': 'ceph-93e9f309-356a-50f8-bf6b-26db11b00033'})  2025-06-02 19:54:34.358843 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-01a13ba8-1f69-5051-bec5-e01e7e9b87e5', 'data_vg': 'ceph-01a13ba8-1f69-5051-bec5-e01e7e9b87e5'})  2025-06-02 19:54:34.359381 | orchestrator | skipping: [testbed-node-3] 2025-06-02 19:54:34.359859 | orchestrator | 2025-06-02 19:54:34.360575 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2025-06-02 19:54:34.361004 | orchestrator | Monday 02 June 2025 19:54:34 +0000 (0:00:00.146) 0:00:14.657 *********** 2025-06-02 19:54:34.490355 | orchestrator | skipping: [testbed-node-3] 2025-06-02 19:54:34.490913 | orchestrator | 2025-06-02 19:54:34.492016 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2025-06-02 19:54:34.493248 | orchestrator | Monday 02 June 2025 19:54:34 +0000 (0:00:00.134) 0:00:14.791 *********** 2025-06-02 19:54:34.637690 | orchestrator | skipping: [testbed-node-3] 2025-06-02 19:54:34.637958 | orchestrator | 2025-06-02 19:54:34.639197 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2025-06-02 19:54:34.639975 | orchestrator | Monday 02 June 2025 19:54:34 +0000 (0:00:00.147) 
0:00:14.939 *********** 2025-06-02 19:54:34.776540 | orchestrator | skipping: [testbed-node-3] 2025-06-02 19:54:34.777695 | orchestrator | 2025-06-02 19:54:34.778536 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2025-06-02 19:54:34.779416 | orchestrator | Monday 02 June 2025 19:54:34 +0000 (0:00:00.138) 0:00:15.078 *********** 2025-06-02 19:54:35.121218 | orchestrator | ok: [testbed-node-3] => { 2025-06-02 19:54:35.122114 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2025-06-02 19:54:35.123783 | orchestrator | } 2025-06-02 19:54:35.124973 | orchestrator | 2025-06-02 19:54:35.126149 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2025-06-02 19:54:35.127036 | orchestrator | Monday 02 June 2025 19:54:35 +0000 (0:00:00.342) 0:00:15.421 *********** 2025-06-02 19:54:35.277016 | orchestrator | ok: [testbed-node-3] => { 2025-06-02 19:54:35.277119 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2025-06-02 19:54:35.277673 | orchestrator | } 2025-06-02 19:54:35.278212 | orchestrator | 2025-06-02 19:54:35.279039 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2025-06-02 19:54:35.280515 | orchestrator | Monday 02 June 2025 19:54:35 +0000 (0:00:00.156) 0:00:15.578 *********** 2025-06-02 19:54:35.431169 | orchestrator | ok: [testbed-node-3] => { 2025-06-02 19:54:35.431257 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2025-06-02 19:54:35.431267 | orchestrator | } 2025-06-02 19:54:35.431649 | orchestrator | 2025-06-02 19:54:35.433154 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ******************** 2025-06-02 19:54:35.434347 | orchestrator | Monday 02 June 2025 19:54:35 +0000 (0:00:00.150) 0:00:15.728 *********** 2025-06-02 19:54:36.118357 | orchestrator | ok: [testbed-node-3] 2025-06-02 19:54:36.119688 | orchestrator | 2025-06-02 19:54:36.119726 | orchestrator | TASK [Gather WAL VGs 
with total and available size in bytes] ******************* 2025-06-02 19:54:36.120237 | orchestrator | Monday 02 June 2025 19:54:36 +0000 (0:00:00.688) 0:00:16.417 *********** 2025-06-02 19:54:36.628753 | orchestrator | ok: [testbed-node-3] 2025-06-02 19:54:36.628891 | orchestrator | 2025-06-02 19:54:36.629934 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2025-06-02 19:54:36.630782 | orchestrator | Monday 02 June 2025 19:54:36 +0000 (0:00:00.512) 0:00:16.930 *********** 2025-06-02 19:54:37.150737 | orchestrator | ok: [testbed-node-3] 2025-06-02 19:54:37.151491 | orchestrator | 2025-06-02 19:54:37.153225 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2025-06-02 19:54:37.153257 | orchestrator | Monday 02 June 2025 19:54:37 +0000 (0:00:00.520) 0:00:17.450 *********** 2025-06-02 19:54:37.306861 | orchestrator | ok: [testbed-node-3] 2025-06-02 19:54:37.307393 | orchestrator | 2025-06-02 19:54:37.308259 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2025-06-02 19:54:37.309915 | orchestrator | Monday 02 June 2025 19:54:37 +0000 (0:00:00.156) 0:00:17.606 *********** 2025-06-02 19:54:37.426617 | orchestrator | skipping: [testbed-node-3] 2025-06-02 19:54:37.426781 | orchestrator | 2025-06-02 19:54:37.427474 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2025-06-02 19:54:37.428145 | orchestrator | Monday 02 June 2025 19:54:37 +0000 (0:00:00.120) 0:00:17.727 *********** 2025-06-02 19:54:37.538637 | orchestrator | skipping: [testbed-node-3] 2025-06-02 19:54:37.539256 | orchestrator | 2025-06-02 19:54:37.539765 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2025-06-02 19:54:37.540477 | orchestrator | Monday 02 June 2025 19:54:37 +0000 (0:00:00.112) 0:00:17.840 *********** 2025-06-02 19:54:37.680915 | orchestrator | ok: 
[testbed-node-3] => { 2025-06-02 19:54:37.681128 | orchestrator |  "vgs_report": { 2025-06-02 19:54:37.682786 | orchestrator |  "vg": [] 2025-06-02 19:54:37.684679 | orchestrator |  } 2025-06-02 19:54:37.685365 | orchestrator | } 2025-06-02 19:54:37.686247 | orchestrator | 2025-06-02 19:54:37.686705 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2025-06-02 19:54:37.687235 | orchestrator | Monday 02 June 2025 19:54:37 +0000 (0:00:00.141) 0:00:17.981 *********** 2025-06-02 19:54:37.816281 | orchestrator | skipping: [testbed-node-3] 2025-06-02 19:54:37.816909 | orchestrator | 2025-06-02 19:54:37.818799 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2025-06-02 19:54:37.821321 | orchestrator | Monday 02 June 2025 19:54:37 +0000 (0:00:00.134) 0:00:18.115 *********** 2025-06-02 19:54:37.956216 | orchestrator | skipping: [testbed-node-3] 2025-06-02 19:54:37.956334 | orchestrator | 2025-06-02 19:54:37.956958 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2025-06-02 19:54:37.958111 | orchestrator | Monday 02 June 2025 19:54:37 +0000 (0:00:00.138) 0:00:18.254 *********** 2025-06-02 19:54:38.288079 | orchestrator | skipping: [testbed-node-3] 2025-06-02 19:54:38.288251 | orchestrator | 2025-06-02 19:54:38.289319 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2025-06-02 19:54:38.290058 | orchestrator | Monday 02 June 2025 19:54:38 +0000 (0:00:00.334) 0:00:18.588 *********** 2025-06-02 19:54:38.418876 | orchestrator | skipping: [testbed-node-3] 2025-06-02 19:54:38.419213 | orchestrator | 2025-06-02 19:54:38.419903 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2025-06-02 19:54:38.420554 | orchestrator | Monday 02 June 2025 19:54:38 +0000 (0:00:00.132) 0:00:18.721 *********** 2025-06-02 19:54:38.556900 | orchestrator | skipping: 
[testbed-node-3] 2025-06-02 19:54:38.557063 | orchestrator | 2025-06-02 19:54:38.559494 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2025-06-02 19:54:38.560911 | orchestrator | Monday 02 June 2025 19:54:38 +0000 (0:00:00.137) 0:00:18.858 *********** 2025-06-02 19:54:38.686382 | orchestrator | skipping: [testbed-node-3] 2025-06-02 19:54:38.686538 | orchestrator | 2025-06-02 19:54:38.686628 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2025-06-02 19:54:38.687075 | orchestrator | Monday 02 June 2025 19:54:38 +0000 (0:00:00.129) 0:00:18.988 *********** 2025-06-02 19:54:38.813489 | orchestrator | skipping: [testbed-node-3] 2025-06-02 19:54:38.814768 | orchestrator | 2025-06-02 19:54:38.815918 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2025-06-02 19:54:38.816682 | orchestrator | Monday 02 June 2025 19:54:38 +0000 (0:00:00.125) 0:00:19.113 *********** 2025-06-02 19:54:38.940779 | orchestrator | skipping: [testbed-node-3] 2025-06-02 19:54:38.944273 | orchestrator | 2025-06-02 19:54:38.944958 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2025-06-02 19:54:38.945797 | orchestrator | Monday 02 June 2025 19:54:38 +0000 (0:00:00.128) 0:00:19.241 *********** 2025-06-02 19:54:39.064924 | orchestrator | skipping: [testbed-node-3] 2025-06-02 19:54:39.065667 | orchestrator | 2025-06-02 19:54:39.066497 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2025-06-02 19:54:39.067616 | orchestrator | Monday 02 June 2025 19:54:39 +0000 (0:00:00.122) 0:00:19.364 *********** 2025-06-02 19:54:39.198400 | orchestrator | skipping: [testbed-node-3] 2025-06-02 19:54:39.201246 | orchestrator | 2025-06-02 19:54:39.202574 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2025-06-02 19:54:39.203178 | 
orchestrator | Monday 02 June 2025 19:54:39 +0000 (0:00:00.134) 0:00:19.499 *********** 2025-06-02 19:54:39.327247 | orchestrator | skipping: [testbed-node-3] 2025-06-02 19:54:39.329593 | orchestrator | 2025-06-02 19:54:39.330614 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2025-06-02 19:54:39.332555 | orchestrator | Monday 02 June 2025 19:54:39 +0000 (0:00:00.128) 0:00:19.628 *********** 2025-06-02 19:54:39.457075 | orchestrator | skipping: [testbed-node-3] 2025-06-02 19:54:39.457277 | orchestrator | 2025-06-02 19:54:39.457739 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2025-06-02 19:54:39.458525 | orchestrator | Monday 02 June 2025 19:54:39 +0000 (0:00:00.126) 0:00:19.754 *********** 2025-06-02 19:54:39.598562 | orchestrator | skipping: [testbed-node-3] 2025-06-02 19:54:39.598971 | orchestrator | 2025-06-02 19:54:39.599826 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2025-06-02 19:54:39.600296 | orchestrator | Monday 02 June 2025 19:54:39 +0000 (0:00:00.145) 0:00:19.899 *********** 2025-06-02 19:54:39.730869 | orchestrator | skipping: [testbed-node-3] 2025-06-02 19:54:39.731720 | orchestrator | 2025-06-02 19:54:39.732750 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2025-06-02 19:54:39.733648 | orchestrator | Monday 02 June 2025 19:54:39 +0000 (0:00:00.132) 0:00:20.032 *********** 2025-06-02 19:54:39.891153 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-93e9f309-356a-50f8-bf6b-26db11b00033', 'data_vg': 'ceph-93e9f309-356a-50f8-bf6b-26db11b00033'})  2025-06-02 19:54:39.891337 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-01a13ba8-1f69-5051-bec5-e01e7e9b87e5', 'data_vg': 'ceph-01a13ba8-1f69-5051-bec5-e01e7e9b87e5'})  2025-06-02 19:54:39.892616 | orchestrator | skipping: [testbed-node-3] 2025-06-02 
19:54:39.893382 | orchestrator | 2025-06-02 19:54:39.895239 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2025-06-02 19:54:39.896342 | orchestrator | Monday 02 June 2025 19:54:39 +0000 (0:00:00.160) 0:00:20.192 *********** 2025-06-02 19:54:40.280819 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-93e9f309-356a-50f8-bf6b-26db11b00033', 'data_vg': 'ceph-93e9f309-356a-50f8-bf6b-26db11b00033'})  2025-06-02 19:54:40.281098 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-01a13ba8-1f69-5051-bec5-e01e7e9b87e5', 'data_vg': 'ceph-01a13ba8-1f69-5051-bec5-e01e7e9b87e5'})  2025-06-02 19:54:40.281610 | orchestrator | skipping: [testbed-node-3] 2025-06-02 19:54:40.282322 | orchestrator | 2025-06-02 19:54:40.283656 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2025-06-02 19:54:40.283751 | orchestrator | Monday 02 June 2025 19:54:40 +0000 (0:00:00.388) 0:00:20.581 *********** 2025-06-02 19:54:40.453119 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-93e9f309-356a-50f8-bf6b-26db11b00033', 'data_vg': 'ceph-93e9f309-356a-50f8-bf6b-26db11b00033'})  2025-06-02 19:54:40.453718 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-01a13ba8-1f69-5051-bec5-e01e7e9b87e5', 'data_vg': 'ceph-01a13ba8-1f69-5051-bec5-e01e7e9b87e5'})  2025-06-02 19:54:40.454718 | orchestrator | skipping: [testbed-node-3] 2025-06-02 19:54:40.456509 | orchestrator | 2025-06-02 19:54:40.457691 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2025-06-02 19:54:40.458152 | orchestrator | Monday 02 June 2025 19:54:40 +0000 (0:00:00.172) 0:00:20.754 *********** 2025-06-02 19:54:40.619534 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-93e9f309-356a-50f8-bf6b-26db11b00033', 'data_vg': 'ceph-93e9f309-356a-50f8-bf6b-26db11b00033'})  2025-06-02 
19:54:40.619684 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-01a13ba8-1f69-5051-bec5-e01e7e9b87e5', 'data_vg': 'ceph-01a13ba8-1f69-5051-bec5-e01e7e9b87e5'})  2025-06-02 19:54:40.620574 | orchestrator | skipping: [testbed-node-3] 2025-06-02 19:54:40.621179 | orchestrator | 2025-06-02 19:54:40.622610 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2025-06-02 19:54:40.623071 | orchestrator | Monday 02 June 2025 19:54:40 +0000 (0:00:00.165) 0:00:20.919 *********** 2025-06-02 19:54:40.776233 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-93e9f309-356a-50f8-bf6b-26db11b00033', 'data_vg': 'ceph-93e9f309-356a-50f8-bf6b-26db11b00033'})  2025-06-02 19:54:40.777019 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-01a13ba8-1f69-5051-bec5-e01e7e9b87e5', 'data_vg': 'ceph-01a13ba8-1f69-5051-bec5-e01e7e9b87e5'})  2025-06-02 19:54:40.777914 | orchestrator | skipping: [testbed-node-3] 2025-06-02 19:54:40.780874 | orchestrator | 2025-06-02 19:54:40.781402 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2025-06-02 19:54:40.782521 | orchestrator | Monday 02 June 2025 19:54:40 +0000 (0:00:00.157) 0:00:21.077 *********** 2025-06-02 19:54:40.925247 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-93e9f309-356a-50f8-bf6b-26db11b00033', 'data_vg': 'ceph-93e9f309-356a-50f8-bf6b-26db11b00033'})  2025-06-02 19:54:40.925929 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-01a13ba8-1f69-5051-bec5-e01e7e9b87e5', 'data_vg': 'ceph-01a13ba8-1f69-5051-bec5-e01e7e9b87e5'})  2025-06-02 19:54:40.927009 | orchestrator | skipping: [testbed-node-3] 2025-06-02 19:54:40.928412 | orchestrator | 2025-06-02 19:54:40.929382 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2025-06-02 19:54:40.929940 | orchestrator | Monday 02 June 2025 
19:54:40 +0000 (0:00:00.148) 0:00:21.225 *********** 2025-06-02 19:54:41.092911 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-93e9f309-356a-50f8-bf6b-26db11b00033', 'data_vg': 'ceph-93e9f309-356a-50f8-bf6b-26db11b00033'})  2025-06-02 19:54:41.093496 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-01a13ba8-1f69-5051-bec5-e01e7e9b87e5', 'data_vg': 'ceph-01a13ba8-1f69-5051-bec5-e01e7e9b87e5'})  2025-06-02 19:54:41.094529 | orchestrator | skipping: [testbed-node-3] 2025-06-02 19:54:41.096058 | orchestrator | 2025-06-02 19:54:41.096127 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2025-06-02 19:54:41.097062 | orchestrator | Monday 02 June 2025 19:54:41 +0000 (0:00:00.167) 0:00:21.393 *********** 2025-06-02 19:54:41.247912 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-93e9f309-356a-50f8-bf6b-26db11b00033', 'data_vg': 'ceph-93e9f309-356a-50f8-bf6b-26db11b00033'})  2025-06-02 19:54:41.248107 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-01a13ba8-1f69-5051-bec5-e01e7e9b87e5', 'data_vg': 'ceph-01a13ba8-1f69-5051-bec5-e01e7e9b87e5'})  2025-06-02 19:54:41.249753 | orchestrator | skipping: [testbed-node-3] 2025-06-02 19:54:41.251047 | orchestrator | 2025-06-02 19:54:41.252643 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2025-06-02 19:54:41.253778 | orchestrator | Monday 02 June 2025 19:54:41 +0000 (0:00:00.155) 0:00:21.548 *********** 2025-06-02 19:54:41.756164 | orchestrator | ok: [testbed-node-3] 2025-06-02 19:54:41.756575 | orchestrator | 2025-06-02 19:54:41.758223 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2025-06-02 19:54:41.759758 | orchestrator | Monday 02 June 2025 19:54:41 +0000 (0:00:00.508) 0:00:22.057 *********** 2025-06-02 19:54:42.250569 | orchestrator | ok: [testbed-node-3] 2025-06-02 19:54:42.252014 | 
orchestrator | 2025-06-02 19:54:42.252310 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2025-06-02 19:54:42.252937 | orchestrator | Monday 02 June 2025 19:54:42 +0000 (0:00:00.493) 0:00:22.550 *********** 2025-06-02 19:54:42.382578 | orchestrator | ok: [testbed-node-3] 2025-06-02 19:54:42.383080 | orchestrator | 2025-06-02 19:54:42.384055 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2025-06-02 19:54:42.384550 | orchestrator | Monday 02 June 2025 19:54:42 +0000 (0:00:00.133) 0:00:22.684 *********** 2025-06-02 19:54:42.553176 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-01a13ba8-1f69-5051-bec5-e01e7e9b87e5', 'vg_name': 'ceph-01a13ba8-1f69-5051-bec5-e01e7e9b87e5'}) 2025-06-02 19:54:42.553621 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-93e9f309-356a-50f8-bf6b-26db11b00033', 'vg_name': 'ceph-93e9f309-356a-50f8-bf6b-26db11b00033'}) 2025-06-02 19:54:42.554621 | orchestrator | 2025-06-02 19:54:42.555759 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2025-06-02 19:54:42.557350 | orchestrator | Monday 02 June 2025 19:54:42 +0000 (0:00:00.169) 0:00:22.854 *********** 2025-06-02 19:54:42.702665 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-93e9f309-356a-50f8-bf6b-26db11b00033', 'data_vg': 'ceph-93e9f309-356a-50f8-bf6b-26db11b00033'})  2025-06-02 19:54:42.703401 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-01a13ba8-1f69-5051-bec5-e01e7e9b87e5', 'data_vg': 'ceph-01a13ba8-1f69-5051-bec5-e01e7e9b87e5'})  2025-06-02 19:54:42.704488 | orchestrator | skipping: [testbed-node-3] 2025-06-02 19:54:42.705121 | orchestrator | 2025-06-02 19:54:42.706974 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2025-06-02 19:54:42.707022 | orchestrator | Monday 02 June 2025 19:54:42 +0000 
(0:00:00.149) 0:00:23.004 ***********
2025-06-02 19:54:43.053337 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-93e9f309-356a-50f8-bf6b-26db11b00033', 'data_vg': 'ceph-93e9f309-356a-50f8-bf6b-26db11b00033'})
2025-06-02 19:54:43.053744 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-01a13ba8-1f69-5051-bec5-e01e7e9b87e5', 'data_vg': 'ceph-01a13ba8-1f69-5051-bec5-e01e7e9b87e5'})
2025-06-02 19:54:43.054535 | orchestrator | skipping: [testbed-node-3]
2025-06-02 19:54:43.055370 | orchestrator |
2025-06-02 19:54:43.056794 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************
2025-06-02 19:54:43.057023 | orchestrator | Monday 02 June 2025 19:54:43 +0000 (0:00:00.350) 0:00:23.354 ***********
2025-06-02 19:54:43.205185 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-93e9f309-356a-50f8-bf6b-26db11b00033', 'data_vg': 'ceph-93e9f309-356a-50f8-bf6b-26db11b00033'})
2025-06-02 19:54:43.205284 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-01a13ba8-1f69-5051-bec5-e01e7e9b87e5', 'data_vg': 'ceph-01a13ba8-1f69-5051-bec5-e01e7e9b87e5'})
2025-06-02 19:54:43.206290 | orchestrator | skipping: [testbed-node-3]
2025-06-02 19:54:43.207090 | orchestrator |
2025-06-02 19:54:43.207374 | orchestrator | TASK [Print LVM report data] ***************************************************
2025-06-02 19:54:43.207926 | orchestrator | Monday 02 June 2025 19:54:43 +0000 (0:00:00.151) 0:00:23.505 ***********
2025-06-02 19:54:43.489528 | orchestrator | ok: [testbed-node-3] => {
2025-06-02 19:54:43.490163 | orchestrator |     "lvm_report": {
2025-06-02 19:54:43.491293 | orchestrator |         "lv": [
2025-06-02 19:54:43.493205 | orchestrator |             {
2025-06-02 19:54:43.493225 | orchestrator |                 "lv_name": "osd-block-01a13ba8-1f69-5051-bec5-e01e7e9b87e5",
2025-06-02 19:54:43.493962 | orchestrator |                 "vg_name": "ceph-01a13ba8-1f69-5051-bec5-e01e7e9b87e5"
2025-06-02 19:54:43.494876 | orchestrator |             },
2025-06-02 19:54:43.495767 | orchestrator |             {
2025-06-02 19:54:43.497544 | orchestrator |                 "lv_name": "osd-block-93e9f309-356a-50f8-bf6b-26db11b00033",
2025-06-02 19:54:43.497719 | orchestrator |                 "vg_name": "ceph-93e9f309-356a-50f8-bf6b-26db11b00033"
2025-06-02 19:54:43.498397 | orchestrator |             }
2025-06-02 19:54:43.499030 | orchestrator |         ],
2025-06-02 19:54:43.499567 | orchestrator |         "pv": [
2025-06-02 19:54:43.500301 | orchestrator |             {
2025-06-02 19:54:43.500647 | orchestrator |                 "pv_name": "/dev/sdb",
2025-06-02 19:54:43.501215 | orchestrator |                 "vg_name": "ceph-93e9f309-356a-50f8-bf6b-26db11b00033"
2025-06-02 19:54:43.501781 | orchestrator |             },
2025-06-02 19:54:43.502870 | orchestrator |             {
2025-06-02 19:54:43.503657 | orchestrator |                 "pv_name": "/dev/sdc",
2025-06-02 19:54:43.505163 | orchestrator |                 "vg_name": "ceph-01a13ba8-1f69-5051-bec5-e01e7e9b87e5"
2025-06-02 19:54:43.506135 | orchestrator |             }
2025-06-02 19:54:43.507330 | orchestrator |         ]
2025-06-02 19:54:43.507798 | orchestrator |     }
2025-06-02 19:54:43.508870 | orchestrator | }
2025-06-02 19:54:43.509760 | orchestrator |
2025-06-02 19:54:43.510770 | orchestrator | PLAY [Ceph create LVM devices] *************************************************
2025-06-02 19:54:43.511361 | orchestrator |
2025-06-02 19:54:43.512268 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2025-06-02 19:54:43.512931 | orchestrator | Monday 02 June 2025 19:54:43 +0000 (0:00:00.284) 0:00:23.790 ***********
2025-06-02 19:54:43.735648 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)]
2025-06-02 19:54:43.735753 | orchestrator |
2025-06-02 19:54:43.736189 | orchestrator | TASK [Get initial list of available block devices] *****************************
2025-06-02 19:54:43.736707 | orchestrator | Monday 02 June 2025 19:54:43 +0000 (0:00:00.245) 0:00:24.036 ***********
2025-06-02 19:54:43.957955 | orchestrator | ok:
[testbed-node-4] 2025-06-02 19:54:43.958062 | orchestrator | 2025-06-02 19:54:43.959157 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-02 19:54:43.961743 | orchestrator | Monday 02 June 2025 19:54:43 +0000 (0:00:00.222) 0:00:24.258 *********** 2025-06-02 19:54:44.367579 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0) 2025-06-02 19:54:44.368768 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1) 2025-06-02 19:54:44.369470 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2) 2025-06-02 19:54:44.370626 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3) 2025-06-02 19:54:44.372616 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4) 2025-06-02 19:54:44.372889 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5) 2025-06-02 19:54:44.373635 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6) 2025-06-02 19:54:44.374714 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7) 2025-06-02 19:54:44.375787 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda) 2025-06-02 19:54:44.376571 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb) 2025-06-02 19:54:44.377012 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc) 2025-06-02 19:54:44.377886 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd) 2025-06-02 19:54:44.378498 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0) 2025-06-02 19:54:44.378826 | orchestrator | 2025-06-02 
19:54:44.379539 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-02 19:54:44.380000 | orchestrator | Monday 02 June 2025 19:54:44 +0000 (0:00:00.408) 0:00:24.667 *********** 2025-06-02 19:54:44.561007 | orchestrator | skipping: [testbed-node-4] 2025-06-02 19:54:44.561673 | orchestrator | 2025-06-02 19:54:44.562960 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-02 19:54:44.564652 | orchestrator | Monday 02 June 2025 19:54:44 +0000 (0:00:00.194) 0:00:24.862 *********** 2025-06-02 19:54:44.760534 | orchestrator | skipping: [testbed-node-4] 2025-06-02 19:54:44.762874 | orchestrator | 2025-06-02 19:54:44.763981 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-02 19:54:44.765030 | orchestrator | Monday 02 June 2025 19:54:44 +0000 (0:00:00.199) 0:00:25.061 *********** 2025-06-02 19:54:44.956840 | orchestrator | skipping: [testbed-node-4] 2025-06-02 19:54:44.957685 | orchestrator | 2025-06-02 19:54:44.958723 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-02 19:54:44.959539 | orchestrator | Monday 02 June 2025 19:54:44 +0000 (0:00:00.196) 0:00:25.257 *********** 2025-06-02 19:54:45.557262 | orchestrator | skipping: [testbed-node-4] 2025-06-02 19:54:45.558240 | orchestrator | 2025-06-02 19:54:45.560123 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-02 19:54:45.560900 | orchestrator | Monday 02 June 2025 19:54:45 +0000 (0:00:00.598) 0:00:25.856 *********** 2025-06-02 19:54:45.757645 | orchestrator | skipping: [testbed-node-4] 2025-06-02 19:54:45.758136 | orchestrator | 2025-06-02 19:54:45.758786 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-02 19:54:45.759946 | orchestrator | Monday 02 June 2025 19:54:45 +0000 (0:00:00.202) 
0:00:26.058 *********** 2025-06-02 19:54:45.955858 | orchestrator | skipping: [testbed-node-4] 2025-06-02 19:54:45.957101 | orchestrator | 2025-06-02 19:54:45.957334 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-02 19:54:45.957856 | orchestrator | Monday 02 June 2025 19:54:45 +0000 (0:00:00.198) 0:00:26.257 *********** 2025-06-02 19:54:46.195042 | orchestrator | skipping: [testbed-node-4] 2025-06-02 19:54:46.195623 | orchestrator | 2025-06-02 19:54:46.196378 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-02 19:54:46.196779 | orchestrator | Monday 02 June 2025 19:54:46 +0000 (0:00:00.239) 0:00:26.496 *********** 2025-06-02 19:54:46.381759 | orchestrator | skipping: [testbed-node-4] 2025-06-02 19:54:46.381893 | orchestrator | 2025-06-02 19:54:46.382247 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-02 19:54:46.382499 | orchestrator | Monday 02 June 2025 19:54:46 +0000 (0:00:00.186) 0:00:26.683 *********** 2025-06-02 19:54:46.782284 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_453458da-4d99-4de0-a2fa-ec8f657b9d69) 2025-06-02 19:54:46.782639 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_453458da-4d99-4de0-a2fa-ec8f657b9d69) 2025-06-02 19:54:46.783993 | orchestrator | 2025-06-02 19:54:46.786857 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-02 19:54:46.786892 | orchestrator | Monday 02 June 2025 19:54:46 +0000 (0:00:00.400) 0:00:27.083 *********** 2025-06-02 19:54:47.194110 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_56067267-e29e-4b33-bc58-6a568e4c77ee) 2025-06-02 19:54:47.194653 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_56067267-e29e-4b33-bc58-6a568e4c77ee) 2025-06-02 19:54:47.195877 | orchestrator | 2025-06-02 19:54:47.198347 
| orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-02 19:54:47.199022 | orchestrator | Monday 02 June 2025 19:54:47 +0000 (0:00:00.410) 0:00:27.494 *********** 2025-06-02 19:54:47.624231 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_afb213e9-57a6-474d-a5f5-62ab693fc54b) 2025-06-02 19:54:47.625184 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_afb213e9-57a6-474d-a5f5-62ab693fc54b) 2025-06-02 19:54:47.625841 | orchestrator | 2025-06-02 19:54:47.626831 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-02 19:54:47.627923 | orchestrator | Monday 02 June 2025 19:54:47 +0000 (0:00:00.431) 0:00:27.925 *********** 2025-06-02 19:54:48.057481 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_b537626e-57d0-4db8-bc93-475b5479d5db) 2025-06-02 19:54:48.057587 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_b537626e-57d0-4db8-bc93-475b5479d5db) 2025-06-02 19:54:48.058596 | orchestrator | 2025-06-02 19:54:48.059504 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-02 19:54:48.061387 | orchestrator | Monday 02 June 2025 19:54:48 +0000 (0:00:00.432) 0:00:28.358 *********** 2025-06-02 19:54:48.384940 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-06-02 19:54:48.385841 | orchestrator | 2025-06-02 19:54:48.386996 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 19:54:48.388568 | orchestrator | Monday 02 June 2025 19:54:48 +0000 (0:00:00.326) 0:00:28.684 *********** 2025-06-02 19:54:48.975255 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0) 2025-06-02 19:54:48.976395 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1) 2025-06-02 
19:54:48.978193 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2) 2025-06-02 19:54:48.979733 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3) 2025-06-02 19:54:48.980134 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4) 2025-06-02 19:54:48.982556 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5) 2025-06-02 19:54:48.982583 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6) 2025-06-02 19:54:48.982595 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7) 2025-06-02 19:54:48.982606 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda) 2025-06-02 19:54:48.982617 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb) 2025-06-02 19:54:48.982629 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc) 2025-06-02 19:54:48.982682 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd) 2025-06-02 19:54:48.982914 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0) 2025-06-02 19:54:48.983317 | orchestrator | 2025-06-02 19:54:48.983573 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 19:54:48.983857 | orchestrator | Monday 02 June 2025 19:54:48 +0000 (0:00:00.591) 0:00:29.276 *********** 2025-06-02 19:54:49.180817 | orchestrator | skipping: [testbed-node-4] 2025-06-02 19:54:49.181662 | orchestrator | 2025-06-02 19:54:49.182571 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 19:54:49.183303 | orchestrator | Monday 02 
June 2025 19:54:49 +0000 (0:00:00.205) 0:00:29.481 *********** 2025-06-02 19:54:49.392743 | orchestrator | skipping: [testbed-node-4] 2025-06-02 19:54:49.392926 | orchestrator | 2025-06-02 19:54:49.393918 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 19:54:49.395983 | orchestrator | Monday 02 June 2025 19:54:49 +0000 (0:00:00.210) 0:00:29.692 *********** 2025-06-02 19:54:49.575085 | orchestrator | skipping: [testbed-node-4] 2025-06-02 19:54:49.575512 | orchestrator | 2025-06-02 19:54:49.576250 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 19:54:49.577004 | orchestrator | Monday 02 June 2025 19:54:49 +0000 (0:00:00.184) 0:00:29.876 *********** 2025-06-02 19:54:49.754322 | orchestrator | skipping: [testbed-node-4] 2025-06-02 19:54:49.755025 | orchestrator | 2025-06-02 19:54:49.756009 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 19:54:49.757025 | orchestrator | Monday 02 June 2025 19:54:49 +0000 (0:00:00.178) 0:00:30.055 *********** 2025-06-02 19:54:49.928040 | orchestrator | skipping: [testbed-node-4] 2025-06-02 19:54:49.928677 | orchestrator | 2025-06-02 19:54:49.929701 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 19:54:49.930231 | orchestrator | Monday 02 June 2025 19:54:49 +0000 (0:00:00.175) 0:00:30.230 *********** 2025-06-02 19:54:50.118171 | orchestrator | skipping: [testbed-node-4] 2025-06-02 19:54:50.118384 | orchestrator | 2025-06-02 19:54:50.118826 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 19:54:50.119365 | orchestrator | Monday 02 June 2025 19:54:50 +0000 (0:00:00.189) 0:00:30.420 *********** 2025-06-02 19:54:50.294677 | orchestrator | skipping: [testbed-node-4] 2025-06-02 19:54:50.295397 | orchestrator | 2025-06-02 19:54:50.296767 | 
orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 19:54:50.296928 | orchestrator | Monday 02 June 2025 19:54:50 +0000 (0:00:00.174) 0:00:30.595 *********** 2025-06-02 19:54:50.467779 | orchestrator | skipping: [testbed-node-4] 2025-06-02 19:54:50.468195 | orchestrator | 2025-06-02 19:54:50.468685 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 19:54:50.469140 | orchestrator | Monday 02 June 2025 19:54:50 +0000 (0:00:00.175) 0:00:30.770 *********** 2025-06-02 19:54:51.185505 | orchestrator | ok: [testbed-node-4] => (item=sda1) 2025-06-02 19:54:51.186201 | orchestrator | ok: [testbed-node-4] => (item=sda14) 2025-06-02 19:54:51.188487 | orchestrator | ok: [testbed-node-4] => (item=sda15) 2025-06-02 19:54:51.189758 | orchestrator | ok: [testbed-node-4] => (item=sda16) 2025-06-02 19:54:51.190394 | orchestrator | 2025-06-02 19:54:51.191020 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 19:54:51.191884 | orchestrator | Monday 02 June 2025 19:54:51 +0000 (0:00:00.716) 0:00:31.487 *********** 2025-06-02 19:54:51.415628 | orchestrator | skipping: [testbed-node-4] 2025-06-02 19:54:51.415732 | orchestrator | 2025-06-02 19:54:51.416551 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 19:54:51.416809 | orchestrator | Monday 02 June 2025 19:54:51 +0000 (0:00:00.227) 0:00:31.714 *********** 2025-06-02 19:54:51.597963 | orchestrator | skipping: [testbed-node-4] 2025-06-02 19:54:51.598669 | orchestrator | 2025-06-02 19:54:51.599908 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 19:54:51.600711 | orchestrator | Monday 02 June 2025 19:54:51 +0000 (0:00:00.185) 0:00:31.899 *********** 2025-06-02 19:54:52.071609 | orchestrator | skipping: [testbed-node-4] 2025-06-02 19:54:52.072581 | 
orchestrator | 2025-06-02 19:54:52.073172 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 19:54:52.074538 | orchestrator | Monday 02 June 2025 19:54:52 +0000 (0:00:00.472) 0:00:32.372 *********** 2025-06-02 19:54:52.236987 | orchestrator | skipping: [testbed-node-4] 2025-06-02 19:54:52.237928 | orchestrator | 2025-06-02 19:54:52.238826 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2025-06-02 19:54:52.240201 | orchestrator | Monday 02 June 2025 19:54:52 +0000 (0:00:00.166) 0:00:32.539 *********** 2025-06-02 19:54:52.361532 | orchestrator | skipping: [testbed-node-4] 2025-06-02 19:54:52.361832 | orchestrator | 2025-06-02 19:54:52.363513 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2025-06-02 19:54:52.364736 | orchestrator | Monday 02 June 2025 19:54:52 +0000 (0:00:00.122) 0:00:32.662 *********** 2025-06-02 19:54:52.522178 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'bdb59653-b88e-5628-a878-3ed7677d43f1'}}) 2025-06-02 19:54:52.523289 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'ee20b18c-4531-5b6f-acaf-50beaceb257d'}}) 2025-06-02 19:54:52.524187 | orchestrator | 2025-06-02 19:54:52.525012 | orchestrator | TASK [Create block VGs] ******************************************************** 2025-06-02 19:54:52.525849 | orchestrator | Monday 02 June 2025 19:54:52 +0000 (0:00:00.161) 0:00:32.824 *********** 2025-06-02 19:54:54.385727 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-bdb59653-b88e-5628-a878-3ed7677d43f1', 'data_vg': 'ceph-bdb59653-b88e-5628-a878-3ed7677d43f1'}) 2025-06-02 19:54:54.385937 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-ee20b18c-4531-5b6f-acaf-50beaceb257d', 'data_vg': 'ceph-ee20b18c-4531-5b6f-acaf-50beaceb257d'}) 2025-06-02 19:54:54.386154 | 
orchestrator | 2025-06-02 19:54:54.386849 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2025-06-02 19:54:54.387362 | orchestrator | Monday 02 June 2025 19:54:54 +0000 (0:00:01.861) 0:00:34.685 *********** 2025-06-02 19:54:54.524042 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-bdb59653-b88e-5628-a878-3ed7677d43f1', 'data_vg': 'ceph-bdb59653-b88e-5628-a878-3ed7677d43f1'})  2025-06-02 19:54:54.524393 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ee20b18c-4531-5b6f-acaf-50beaceb257d', 'data_vg': 'ceph-ee20b18c-4531-5b6f-acaf-50beaceb257d'})  2025-06-02 19:54:54.525323 | orchestrator | skipping: [testbed-node-4] 2025-06-02 19:54:54.525712 | orchestrator | 2025-06-02 19:54:54.526491 | orchestrator | TASK [Create block LVs] ******************************************************** 2025-06-02 19:54:54.527229 | orchestrator | Monday 02 June 2025 19:54:54 +0000 (0:00:00.141) 0:00:34.826 *********** 2025-06-02 19:54:55.754124 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-bdb59653-b88e-5628-a878-3ed7677d43f1', 'data_vg': 'ceph-bdb59653-b88e-5628-a878-3ed7677d43f1'}) 2025-06-02 19:54:55.754259 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-ee20b18c-4531-5b6f-acaf-50beaceb257d', 'data_vg': 'ceph-ee20b18c-4531-5b6f-acaf-50beaceb257d'}) 2025-06-02 19:54:55.754321 | orchestrator | 2025-06-02 19:54:55.755582 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2025-06-02 19:54:55.756981 | orchestrator | Monday 02 June 2025 19:54:55 +0000 (0:00:01.228) 0:00:36.055 *********** 2025-06-02 19:54:55.884770 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-bdb59653-b88e-5628-a878-3ed7677d43f1', 'data_vg': 'ceph-bdb59653-b88e-5628-a878-3ed7677d43f1'})  2025-06-02 19:54:55.885140 | orchestrator | skipping: [testbed-node-4] => (item={'data': 
'osd-block-ee20b18c-4531-5b6f-acaf-50beaceb257d', 'data_vg': 'ceph-ee20b18c-4531-5b6f-acaf-50beaceb257d'})  2025-06-02 19:54:55.886888 | orchestrator | skipping: [testbed-node-4] 2025-06-02 19:54:55.887401 | orchestrator | 2025-06-02 19:54:55.888143 | orchestrator | TASK [Create DB VGs] *********************************************************** 2025-06-02 19:54:55.888661 | orchestrator | Monday 02 June 2025 19:54:55 +0000 (0:00:00.130) 0:00:36.186 *********** 2025-06-02 19:54:56.000827 | orchestrator | skipping: [testbed-node-4] 2025-06-02 19:54:56.001021 | orchestrator | 2025-06-02 19:54:56.001135 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2025-06-02 19:54:56.002114 | orchestrator | Monday 02 June 2025 19:54:55 +0000 (0:00:00.117) 0:00:36.303 *********** 2025-06-02 19:54:56.134549 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-bdb59653-b88e-5628-a878-3ed7677d43f1', 'data_vg': 'ceph-bdb59653-b88e-5628-a878-3ed7677d43f1'})  2025-06-02 19:54:56.134745 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ee20b18c-4531-5b6f-acaf-50beaceb257d', 'data_vg': 'ceph-ee20b18c-4531-5b6f-acaf-50beaceb257d'})  2025-06-02 19:54:56.136060 | orchestrator | skipping: [testbed-node-4] 2025-06-02 19:54:56.136808 | orchestrator | 2025-06-02 19:54:56.137576 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2025-06-02 19:54:56.138167 | orchestrator | Monday 02 June 2025 19:54:56 +0000 (0:00:00.133) 0:00:36.436 *********** 2025-06-02 19:54:56.265772 | orchestrator | skipping: [testbed-node-4] 2025-06-02 19:54:56.266821 | orchestrator | 2025-06-02 19:54:56.266879 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2025-06-02 19:54:56.267324 | orchestrator | Monday 02 June 2025 19:54:56 +0000 (0:00:00.131) 0:00:36.568 *********** 2025-06-02 19:54:56.391571 | orchestrator | skipping: 
[testbed-node-4] => (item={'data': 'osd-block-bdb59653-b88e-5628-a878-3ed7677d43f1', 'data_vg': 'ceph-bdb59653-b88e-5628-a878-3ed7677d43f1'})
2025-06-02 19:54:56.391990 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ee20b18c-4531-5b6f-acaf-50beaceb257d', 'data_vg': 'ceph-ee20b18c-4531-5b6f-acaf-50beaceb257d'})
2025-06-02 19:54:56.392803 | orchestrator | skipping: [testbed-node-4]
2025-06-02 19:54:56.393633 | orchestrator |
2025-06-02 19:54:56.394993 | orchestrator | TASK [Create DB+WAL VGs] *******************************************************
2025-06-02 19:54:56.396008 | orchestrator | Monday 02 June 2025 19:54:56 +0000 (0:00:00.124) 0:00:36.693 ***********
2025-06-02 19:54:56.647739 | orchestrator | skipping: [testbed-node-4]
2025-06-02 19:54:56.648971 | orchestrator |
2025-06-02 19:54:56.650852 | orchestrator | TASK [Print 'Create DB+WAL VGs'] ***********************************************
2025-06-02 19:54:56.651725 | orchestrator | Monday 02 June 2025 19:54:56 +0000 (0:00:00.255) 0:00:36.948 ***********
2025-06-02 19:54:56.781048 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-bdb59653-b88e-5628-a878-3ed7677d43f1', 'data_vg': 'ceph-bdb59653-b88e-5628-a878-3ed7677d43f1'})
2025-06-02 19:54:56.781319 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ee20b18c-4531-5b6f-acaf-50beaceb257d', 'data_vg': 'ceph-ee20b18c-4531-5b6f-acaf-50beaceb257d'})
2025-06-02 19:54:56.782995 | orchestrator | skipping: [testbed-node-4]
2025-06-02 19:54:56.783022 | orchestrator |
2025-06-02 19:54:56.783309 | orchestrator | TASK [Prepare variables for OSD count check] ***********************************
2025-06-02 19:54:56.783655 | orchestrator | Monday 02 June 2025 19:54:56 +0000 (0:00:00.133) 0:00:37.082 ***********
2025-06-02 19:54:56.903929 | orchestrator | ok: [testbed-node-4]
2025-06-02 19:54:56.904455 | orchestrator |
2025-06-02 19:54:56.905353 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] ****************
2025-06-02 19:54:56.906072 | orchestrator | Monday 02 June 2025 19:54:56 +0000 (0:00:00.123) 0:00:37.206 ***********
2025-06-02 19:54:57.040056 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-bdb59653-b88e-5628-a878-3ed7677d43f1', 'data_vg': 'ceph-bdb59653-b88e-5628-a878-3ed7677d43f1'})
2025-06-02 19:54:57.040824 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ee20b18c-4531-5b6f-acaf-50beaceb257d', 'data_vg': 'ceph-ee20b18c-4531-5b6f-acaf-50beaceb257d'})
2025-06-02 19:54:57.042342 | orchestrator | skipping: [testbed-node-4]
2025-06-02 19:54:57.042356 | orchestrator |
2025-06-02 19:54:57.042901 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] ***************
2025-06-02 19:54:57.043652 | orchestrator | Monday 02 June 2025 19:54:57 +0000 (0:00:00.135) 0:00:37.342 ***********
2025-06-02 19:54:57.179480 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-bdb59653-b88e-5628-a878-3ed7677d43f1', 'data_vg': 'ceph-bdb59653-b88e-5628-a878-3ed7677d43f1'})
2025-06-02 19:54:57.179819 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ee20b18c-4531-5b6f-acaf-50beaceb257d', 'data_vg': 'ceph-ee20b18c-4531-5b6f-acaf-50beaceb257d'})
2025-06-02 19:54:57.180828 | orchestrator | skipping: [testbed-node-4]
2025-06-02 19:54:57.181561 | orchestrator |
2025-06-02 19:54:57.182268 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************
2025-06-02 19:54:57.183143 | orchestrator | Monday 02 June 2025 19:54:57 +0000 (0:00:00.139) 0:00:37.481 ***********
2025-06-02 19:54:57.324399 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-bdb59653-b88e-5628-a878-3ed7677d43f1', 'data_vg': 'ceph-bdb59653-b88e-5628-a878-3ed7677d43f1'})
2025-06-02 19:54:57.324645 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ee20b18c-4531-5b6f-acaf-50beaceb257d', 'data_vg': 'ceph-ee20b18c-4531-5b6f-acaf-50beaceb257d'})
2025-06-02 19:54:57.325135 | orchestrator | skipping: [testbed-node-4]
2025-06-02 19:54:57.325875 | orchestrator |
2025-06-02 19:54:57.326575 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] *********************
2025-06-02 19:54:57.327188 | orchestrator | Monday 02 June 2025 19:54:57 +0000 (0:00:00.144) 0:00:37.625 ***********
2025-06-02 19:54:57.439402 | orchestrator | skipping: [testbed-node-4]
2025-06-02 19:54:57.439695 | orchestrator |
2025-06-02 19:54:57.440501 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ********************
2025-06-02 19:54:57.441024 | orchestrator | Monday 02 June 2025 19:54:57 +0000 (0:00:00.115) 0:00:37.741 ***********
2025-06-02 19:54:57.559589 | orchestrator | skipping: [testbed-node-4]
2025-06-02 19:54:57.560471 | orchestrator |
2025-06-02 19:54:57.560669 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] *****************
2025-06-02 19:54:57.561650 | orchestrator | Monday 02 June 2025 19:54:57 +0000 (0:00:00.119) 0:00:37.860 ***********
2025-06-02 19:54:57.672041 | orchestrator | skipping: [testbed-node-4]
2025-06-02 19:54:57.672722 | orchestrator |
2025-06-02 19:54:57.673251 | orchestrator | TASK [Print number of OSDs wanted per DB VG] ***********************************
2025-06-02 19:54:57.674146 | orchestrator | Monday 02 June 2025 19:54:57 +0000 (0:00:00.113) 0:00:37.974 ***********
2025-06-02 19:54:57.795034 | orchestrator | ok: [testbed-node-4] => {
2025-06-02 19:54:57.795702 | orchestrator |     "_num_osds_wanted_per_db_vg": {}
2025-06-02 19:54:57.796906 | orchestrator | }
2025-06-02 19:54:57.797554 | orchestrator |
2025-06-02 19:54:57.797946 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] **********************************
2025-06-02 19:54:57.798595 | orchestrator | Monday 02 June 2025 19:54:57 +0000 (0:00:00.123) 0:00:38.097 ***********
2025-06-02 19:54:57.924396 | orchestrator | ok: [testbed-node-4] => {
2025-06-02 19:54:57.924721 | orchestrator |     "_num_osds_wanted_per_wal_vg": {}
2025-06-02 19:54:57.924805 | orchestrator | }
2025-06-02 19:54:57.924889 | orchestrator |
2025-06-02 19:54:57.926483 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] *******************************
2025-06-02 19:54:57.926517 | orchestrator | Monday 02 June 2025 19:54:57 +0000 (0:00:00.129) 0:00:38.227 ***********
2025-06-02 19:54:58.076359 | orchestrator | ok: [testbed-node-4] => {
2025-06-02 19:54:58.077346 | orchestrator |     "_num_osds_wanted_per_db_wal_vg": {}
2025-06-02 19:54:58.078247 | orchestrator | }
2025-06-02 19:54:58.078733 | orchestrator |
2025-06-02 19:54:58.079484 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ********************
2025-06-02 19:54:58.079981 | orchestrator | Monday 02 June 2025 19:54:58 +0000 (0:00:00.151) 0:00:38.378 ***********
2025-06-02 19:54:58.682206 | orchestrator | ok: [testbed-node-4]
2025-06-02 19:54:58.682382 | orchestrator |
2025-06-02 19:54:58.682860 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] *******************
2025-06-02 19:54:58.683472 | orchestrator | Monday 02 June 2025 19:54:58 +0000 (0:00:00.603) 0:00:38.981 ***********
2025-06-02 19:54:59.200707 | orchestrator | ok: [testbed-node-4]
2025-06-02 19:54:59.201463 | orchestrator |
2025-06-02 19:54:59.201562 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] ****************
2025-06-02 19:54:59.202536 | orchestrator | Monday 02 June 2025 19:54:59 +0000 (0:00:00.520) 0:00:39.502 ***********
2025-06-02 19:54:59.683859 | orchestrator | ok: [testbed-node-4]
2025-06-02 19:54:59.684702 | orchestrator |
2025-06-02 19:54:59.685240 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] *************************
2025-06-02 19:54:59.686252 | orchestrator | Monday 02 June 2025 19:54:59 +0000 (0:00:00.482) 0:00:39.984 ***********
2025-06-02 19:54:59.823699 | orchestrator | ok: [testbed-node-4]
2025-06-02 19:54:59.823818 | orchestrator |
2025-06-02 19:54:59.823844 | orchestrator | TASK [Calculate VG sizes (without buffer)] *************************************
2025-06-02 19:54:59.823865 | orchestrator | Monday 02 June 2025 19:54:59 +0000 (0:00:00.137) 0:00:40.122 ***********
2025-06-02 19:54:59.922287 | orchestrator | skipping: [testbed-node-4]
2025-06-02 19:54:59.923281 | orchestrator |
2025-06-02 19:54:59.923698 | orchestrator | TASK [Calculate VG sizes (with buffer)] ****************************************
2025-06-02 19:54:59.924504 | orchestrator | Monday 02 June 2025 19:54:59 +0000 (0:00:00.102) 0:00:40.224 ***********
2025-06-02 19:55:00.025540 | orchestrator | skipping: [testbed-node-4]
2025-06-02 19:55:00.027560 | orchestrator |
2025-06-02 19:55:00.027610 | orchestrator | TASK [Print LVM VGs report data] ***********************************************
2025-06-02 19:55:00.027624 | orchestrator | Monday 02 June 2025 19:55:00 +0000 (0:00:00.103) 0:00:40.327 ***********
2025-06-02 19:55:00.149974 | orchestrator | ok: [testbed-node-4] => {
2025-06-02 19:55:00.151024 | orchestrator |     "vgs_report": {
2025-06-02 19:55:00.152613 | orchestrator |         "vg": []
2025-06-02 19:55:00.154233 | orchestrator |     }
2025-06-02 19:55:00.154326 | orchestrator | }
2025-06-02 19:55:00.154948 | orchestrator |
2025-06-02 19:55:00.155522 | orchestrator | TASK [Print LVM VG sizes] ******************************************************
2025-06-02 19:55:00.155948 | orchestrator | Monday 02 June 2025 19:55:00 +0000 (0:00:00.124) 0:00:40.452 ***********
2025-06-02 19:55:00.270731 | orchestrator | skipping: [testbed-node-4]
2025-06-02 19:55:00.270915 | orchestrator |
2025-06-02 19:55:00.271545 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************
2025-06-02 19:55:00.272612 | orchestrator | Monday 02 June 2025 19:55:00 +0000 (0:00:00.120) 0:00:40.572 ***********
2025-06-02 19:55:00.394280 | orchestrator | skipping: [testbed-node-4]
2025-06-02 19:55:00.394964 | orchestrator |
2025-06-02 19:55:00.395805 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] ****************************
2025-06-02 19:55:00.396527 | orchestrator | Monday 02 June 2025 19:55:00 +0000 (0:00:00.123) 0:00:40.696 ***********
2025-06-02 19:55:00.509766 | orchestrator | skipping: [testbed-node-4]
2025-06-02 19:55:00.510372 | orchestrator |
2025-06-02 19:55:00.511019 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] *******************
2025-06-02 19:55:00.511853 | orchestrator | Monday 02 June 2025 19:55:00 +0000 (0:00:00.115) 0:00:40.812 ***********
2025-06-02 19:55:00.638399 | orchestrator | skipping: [testbed-node-4]
2025-06-02 19:55:00.639137 | orchestrator |
2025-06-02 19:55:00.639830 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] ***********************
2025-06-02 19:55:00.640827 | orchestrator | Monday 02 June 2025 19:55:00 +0000 (0:00:00.126) 0:00:40.939 ***********
2025-06-02 19:55:00.765484 | orchestrator | skipping: [testbed-node-4]
2025-06-02 19:55:00.765941 | orchestrator |
2025-06-02 19:55:00.766907 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] ***************************
2025-06-02 19:55:00.766942 | orchestrator | Monday 02 June 2025 19:55:00 +0000 (0:00:00.127) 0:00:41.066 ***********
2025-06-02 19:55:01.026132 | orchestrator | skipping: [testbed-node-4]
2025-06-02 19:55:01.026577 | orchestrator |
2025-06-02 19:55:01.027272 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] *****************
2025-06-02 19:55:01.028077 | orchestrator | Monday 02 June 2025 19:55:01 +0000 (0:00:00.261) 0:00:41.328 ***********
2025-06-02 19:55:01.143540 | orchestrator | skipping: [testbed-node-4]
2025-06-02 19:55:01.144071 | orchestrator |
2025-06-02 19:55:01.145300 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] ****************
2025-06-02 19:55:01.145333 | orchestrator | Monday 02 June 2025 19:55:01 +0000 (0:00:00.117) 0:00:41.445 ***********
2025-06-02 19:55:01.271566 | orchestrator | skipping: [testbed-node-4]
2025-06-02 19:55:01.272472 | orchestrator |
2025-06-02 19:55:01.272629 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ********************
2025-06-02 19:55:01.273233 | orchestrator | Monday 02 June 2025 19:55:01 +0000 (0:00:00.127) 0:00:41.573 ***********
2025-06-02 19:55:01.395271 | orchestrator | skipping: [testbed-node-4]
2025-06-02 19:55:01.395495 | orchestrator |
2025-06-02 19:55:01.396191 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] *****************
2025-06-02 19:55:01.396768 | orchestrator | Monday 02 June 2025 19:55:01 +0000 (0:00:00.122) 0:00:41.696 ***********
2025-06-02 19:55:01.518496 | orchestrator | skipping: [testbed-node-4]
2025-06-02 19:55:01.519912 | orchestrator |
2025-06-02 19:55:01.520350 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] *********************
2025-06-02 19:55:01.522276 | orchestrator | Monday 02 June 2025 19:55:01 +0000 (0:00:00.123) 0:00:41.819 ***********
2025-06-02 19:55:01.632002 | orchestrator | skipping: [testbed-node-4]
2025-06-02 19:55:01.633183 | orchestrator |
2025-06-02 19:55:01.633862 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] ***********
2025-06-02 19:55:01.634968 | orchestrator | Monday 02 June 2025 19:55:01 +0000 (0:00:00.112) 0:00:41.932 ***********
2025-06-02 19:55:01.738136 | orchestrator | skipping: [testbed-node-4]
2025-06-02 19:55:01.739552 | orchestrator |
2025-06-02 19:55:01.740550 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] *************************
2025-06-02 19:55:01.741358 | orchestrator | Monday 02 June 2025 19:55:01 +0000 (0:00:00.108) 0:00:42.040 ***********
2025-06-02 19:55:01.851585 | orchestrator | skipping: [testbed-node-4]
2025-06-02 19:55:01.852058 | orchestrator |
2025-06-02 19:55:01.853150 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] *********************
2025-06-02 19:55:01.853933 | orchestrator | Monday 02 June 2025 19:55:01 +0000 (0:00:00.112) 0:00:42.153 ***********
2025-06-02 19:55:01.971776 | orchestrator | skipping: [testbed-node-4]
2025-06-02 19:55:01.972359 | orchestrator |
2025-06-02 19:55:01.973269 | orchestrator | TASK [Create DB LVs for ceph_db_devices] ***************************************
2025-06-02 19:55:01.973796 | orchestrator | Monday 02 June 2025 19:55:01 +0000 (0:00:00.117) 0:00:42.271 ***********
2025-06-02 19:55:02.110489 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-bdb59653-b88e-5628-a878-3ed7677d43f1', 'data_vg': 'ceph-bdb59653-b88e-5628-a878-3ed7677d43f1'})
2025-06-02 19:55:02.111400 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ee20b18c-4531-5b6f-acaf-50beaceb257d', 'data_vg': 'ceph-ee20b18c-4531-5b6f-acaf-50beaceb257d'})
2025-06-02 19:55:02.112154 | orchestrator | skipping: [testbed-node-4]
2025-06-02 19:55:02.113278 | orchestrator |
2025-06-02 19:55:02.114357 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] *******************************
2025-06-02 19:55:02.115523 | orchestrator | Monday 02 June 2025 19:55:02 +0000 (0:00:00.141) 0:00:42.412 ***********
2025-06-02 19:55:02.238476 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-bdb59653-b88e-5628-a878-3ed7677d43f1', 'data_vg': 'ceph-bdb59653-b88e-5628-a878-3ed7677d43f1'})
2025-06-02 19:55:02.243701 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ee20b18c-4531-5b6f-acaf-50beaceb257d', 'data_vg': 'ceph-ee20b18c-4531-5b6f-acaf-50beaceb257d'})
2025-06-02 19:55:02.243753 | orchestrator | skipping: [testbed-node-4]
2025-06-02 19:55:02.243766 | orchestrator |
2025-06-02 19:55:02.243778 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] *************************************
2025-06-02 19:55:02.243791 | orchestrator | Monday 02 June 2025 19:55:02 +0000 (0:00:00.126) 0:00:42.538 ***********
2025-06-02 19:55:02.380203 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-bdb59653-b88e-5628-a878-3ed7677d43f1', 'data_vg': 'ceph-bdb59653-b88e-5628-a878-3ed7677d43f1'})
2025-06-02 19:55:02.380306 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ee20b18c-4531-5b6f-acaf-50beaceb257d', 'data_vg': 'ceph-ee20b18c-4531-5b6f-acaf-50beaceb257d'})
2025-06-02 19:55:02.381286 | orchestrator | skipping: [testbed-node-4]
2025-06-02 19:55:02.381678 | orchestrator |
2025-06-02 19:55:02.382557 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] *****************************
2025-06-02 19:55:02.383524 | orchestrator | Monday 02 June 2025 19:55:02 +0000 (0:00:00.142) 0:00:42.681 ***********
2025-06-02 19:55:02.678765 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-bdb59653-b88e-5628-a878-3ed7677d43f1', 'data_vg': 'ceph-bdb59653-b88e-5628-a878-3ed7677d43f1'})
2025-06-02 19:55:02.680376 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ee20b18c-4531-5b6f-acaf-50beaceb257d', 'data_vg': 'ceph-ee20b18c-4531-5b6f-acaf-50beaceb257d'})
2025-06-02 19:55:02.681483 | orchestrator | skipping: [testbed-node-4]
2025-06-02 19:55:02.683159 | orchestrator |
2025-06-02 19:55:02.684626 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] **********************************
2025-06-02 19:55:02.685805 | orchestrator | Monday 02 June 2025 19:55:02 +0000 (0:00:00.294) 0:00:42.976 ***********
2025-06-02 19:55:02.826831 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-bdb59653-b88e-5628-a878-3ed7677d43f1', 'data_vg': 'ceph-bdb59653-b88e-5628-a878-3ed7677d43f1'})
2025-06-02 19:55:02.831686 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ee20b18c-4531-5b6f-acaf-50beaceb257d', 'data_vg': 'ceph-ee20b18c-4531-5b6f-acaf-50beaceb257d'})
2025-06-02 19:55:02.831779 | orchestrator | skipping: [testbed-node-4]
2025-06-02 19:55:02.831795 | orchestrator |
2025-06-02 19:55:02.831885 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] **************************
2025-06-02 19:55:02.832126 | orchestrator | Monday 02 June 2025 19:55:02 +0000 (0:00:00.150) 0:00:43.126 ***********
2025-06-02 19:55:02.956936 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-bdb59653-b88e-5628-a878-3ed7677d43f1', 'data_vg': 'ceph-bdb59653-b88e-5628-a878-3ed7677d43f1'})
2025-06-02 19:55:02.957586 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ee20b18c-4531-5b6f-acaf-50beaceb257d', 'data_vg': 'ceph-ee20b18c-4531-5b6f-acaf-50beaceb257d'})
2025-06-02 19:55:02.959010 | orchestrator | skipping: [testbed-node-4]
2025-06-02 19:55:02.959054 | orchestrator |
2025-06-02 19:55:02.959664 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] ***********************************
2025-06-02 19:55:02.960544 | orchestrator | Monday 02 June 2025 19:55:02 +0000 (0:00:00.132) 0:00:43.259 ***********
2025-06-02 19:55:03.098889 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-bdb59653-b88e-5628-a878-3ed7677d43f1', 'data_vg': 'ceph-bdb59653-b88e-5628-a878-3ed7677d43f1'})
2025-06-02 19:55:03.099085 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ee20b18c-4531-5b6f-acaf-50beaceb257d', 'data_vg': 'ceph-ee20b18c-4531-5b6f-acaf-50beaceb257d'})
2025-06-02 19:55:03.100555 | orchestrator | skipping: [testbed-node-4]
2025-06-02 19:55:03.100581 | orchestrator |
2025-06-02 19:55:03.101102 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] ***************************
2025-06-02 19:55:03.101732 | orchestrator | Monday 02 June 2025 19:55:03 +0000 (0:00:00.141) 0:00:43.400 ***********
2025-06-02 19:55:03.237920 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-bdb59653-b88e-5628-a878-3ed7677d43f1', 'data_vg': 'ceph-bdb59653-b88e-5628-a878-3ed7677d43f1'})
2025-06-02 19:55:03.238186 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ee20b18c-4531-5b6f-acaf-50beaceb257d', 'data_vg': 'ceph-ee20b18c-4531-5b6f-acaf-50beaceb257d'})
2025-06-02 19:55:03.238661 | orchestrator | skipping: [testbed-node-4]
2025-06-02 19:55:03.239252 | orchestrator |
2025-06-02 19:55:03.239690 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ********************************
2025-06-02 19:55:03.241152 | orchestrator | Monday 02 June 2025 19:55:03 +0000 (0:00:00.140) 0:00:43.540 ***********
2025-06-02 19:55:03.708528 | orchestrator | ok: [testbed-node-4]
2025-06-02 19:55:03.709705 | orchestrator |
2025-06-02 19:55:03.710657 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ********************************
2025-06-02 19:55:03.711264 | orchestrator | Monday 02 June 2025 19:55:03 +0000 (0:00:00.468) 0:00:44.009 ***********
2025-06-02 19:55:04.206340 | orchestrator | ok: [testbed-node-4]
2025-06-02 19:55:04.206602 | orchestrator |
2025-06-02 19:55:04.207539 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] ***********************
2025-06-02 19:55:04.208347 | orchestrator | Monday 02 June 2025 19:55:04 +0000 (0:00:00.127) 0:00:44.507 ***********
2025-06-02 19:55:04.333498 | orchestrator | ok: [testbed-node-4]
2025-06-02 19:55:04.334875 | orchestrator |
2025-06-02 19:55:04.335526 | orchestrator | TASK [Create list of VG/LV names] **********************************************
2025-06-02 19:55:04.336384 | orchestrator | Monday 02 June 2025 19:55:04 +0000 (0:00:00.127) 0:00:44.635 ***********
2025-06-02 19:55:04.485022 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-bdb59653-b88e-5628-a878-3ed7677d43f1', 'vg_name': 'ceph-bdb59653-b88e-5628-a878-3ed7677d43f1'})
2025-06-02 19:55:04.485318 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-ee20b18c-4531-5b6f-acaf-50beaceb257d', 'vg_name': 'ceph-ee20b18c-4531-5b6f-acaf-50beaceb257d'})
2025-06-02 19:55:04.486221 | orchestrator |
2025-06-02 19:55:04.486946 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] **********************
2025-06-02 19:55:04.487681 | orchestrator | Monday 02 June 2025 19:55:04 +0000 (0:00:00.151) 0:00:44.787 ***********
2025-06-02 19:55:04.633703 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-bdb59653-b88e-5628-a878-3ed7677d43f1', 'data_vg': 'ceph-bdb59653-b88e-5628-a878-3ed7677d43f1'})
2025-06-02 19:55:04.634760 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ee20b18c-4531-5b6f-acaf-50beaceb257d', 'data_vg': 'ceph-ee20b18c-4531-5b6f-acaf-50beaceb257d'})
2025-06-02 19:55:04.636045 | orchestrator | skipping: [testbed-node-4]
2025-06-02 19:55:04.636740 | orchestrator |
2025-06-02 19:55:04.640085 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] *************************
2025-06-02 19:55:04.640652 | orchestrator | Monday 02 June 2025 19:55:04 +0000 (0:00:00.148) 0:00:44.935 ***********
2025-06-02 19:55:04.760408 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-bdb59653-b88e-5628-a878-3ed7677d43f1', 'data_vg': 'ceph-bdb59653-b88e-5628-a878-3ed7677d43f1'})
2025-06-02 19:55:04.760647 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ee20b18c-4531-5b6f-acaf-50beaceb257d', 'data_vg': 'ceph-ee20b18c-4531-5b6f-acaf-50beaceb257d'})
2025-06-02 19:55:04.761841 | orchestrator | skipping: [testbed-node-4]
2025-06-02 19:55:04.762221 | orchestrator |
2025-06-02 19:55:04.763221 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************
2025-06-02 19:55:04.764021 | orchestrator | Monday 02 June 2025 19:55:04 +0000 (0:00:00.126) 0:00:45.061 ***********
2025-06-02 19:55:04.887532 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-bdb59653-b88e-5628-a878-3ed7677d43f1', 'data_vg': 'ceph-bdb59653-b88e-5628-a878-3ed7677d43f1'})
2025-06-02 19:55:04.889097 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ee20b18c-4531-5b6f-acaf-50beaceb257d', 'data_vg': 'ceph-ee20b18c-4531-5b6f-acaf-50beaceb257d'})
2025-06-02 19:55:04.889955 | orchestrator | skipping: [testbed-node-4]
2025-06-02 19:55:04.890781 | orchestrator |
2025-06-02 19:55:04.891600 | orchestrator | TASK [Print LVM report data] ***************************************************
2025-06-02 19:55:04.892555 | orchestrator | Monday 02 June 2025 19:55:04 +0000 (0:00:00.128) 0:00:45.189 ***********
2025-06-02 19:55:05.270150 | orchestrator | ok: [testbed-node-4] => {
2025-06-02 19:55:05.270741 | orchestrator |     "lvm_report": {
2025-06-02 19:55:05.271296 | orchestrator |         "lv": [
2025-06-02 19:55:05.272775 | orchestrator |             {
2025-06-02 19:55:05.273193 | orchestrator |                 "lv_name": "osd-block-bdb59653-b88e-5628-a878-3ed7677d43f1",
2025-06-02 19:55:05.274689 | orchestrator |                 "vg_name": "ceph-bdb59653-b88e-5628-a878-3ed7677d43f1"
2025-06-02 19:55:05.275641 | orchestrator |             },
2025-06-02 19:55:05.276140 | orchestrator |             {
2025-06-02 19:55:05.276765 | orchestrator |                 "lv_name": "osd-block-ee20b18c-4531-5b6f-acaf-50beaceb257d",
2025-06-02 19:55:05.277303 | orchestrator |                 "vg_name": "ceph-ee20b18c-4531-5b6f-acaf-50beaceb257d"
2025-06-02 19:55:05.278401 | orchestrator |             }
2025-06-02 19:55:05.278658 | orchestrator |         ],
2025-06-02 19:55:05.279326 | orchestrator |         "pv": [
2025-06-02 19:55:05.279945 | orchestrator |             {
2025-06-02 19:55:05.280705 | orchestrator |                 "pv_name": "/dev/sdb",
2025-06-02 19:55:05.281187 | orchestrator |                 "vg_name": "ceph-bdb59653-b88e-5628-a878-3ed7677d43f1"
2025-06-02 19:55:05.281909 | orchestrator |             },
2025-06-02 19:55:05.282517 | orchestrator |             {
2025-06-02 19:55:05.283137 | orchestrator |                 "pv_name": "/dev/sdc",
2025-06-02 19:55:05.283666 | orchestrator |                 "vg_name": "ceph-ee20b18c-4531-5b6f-acaf-50beaceb257d"
2025-06-02 19:55:05.284176 | orchestrator |             }
2025-06-02 19:55:05.284815 | orchestrator |         ]
2025-06-02 19:55:05.285171 | orchestrator |     }
2025-06-02 19:55:05.286102 | orchestrator | }
2025-06-02 19:55:05.286318 | orchestrator |
2025-06-02 19:55:05.286899 | orchestrator | PLAY [Ceph create LVM devices] *************************************************
2025-06-02 19:55:05.287382 | orchestrator |
2025-06-02 19:55:05.287901 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2025-06-02 19:55:05.288256 | orchestrator | Monday 02 June 2025 19:55:05 +0000 (0:00:00.382) 0:00:45.572 ***********
2025-06-02 19:55:05.484298 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)]
2025-06-02 19:55:05.484657 | orchestrator |
2025-06-02 19:55:05.486072 | orchestrator | TASK [Get initial list of available block devices] *****************************
2025-06-02 19:55:05.486620 | orchestrator | Monday 02 June 2025 19:55:05 +0000 (0:00:00.213) 0:00:45.786 ***********
2025-06-02 19:55:05.672813 | orchestrator | ok: [testbed-node-5]
2025-06-02 19:55:05.674093 | orchestrator |
2025-06-02 19:55:05.674894 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-02 19:55:05.676030 | orchestrator | Monday 02 June 2025 19:55:05 +0000 (0:00:00.188) 0:00:45.974 ***********
2025-06-02 19:55:06.043387 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0)
2025-06-02 19:55:06.045125 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1)
2025-06-02 19:55:06.045625 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2)
2025-06-02 19:55:06.046304 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3)
2025-06-02 19:55:06.047133 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4)
2025-06-02 19:55:06.047760 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5)
2025-06-02 19:55:06.048205 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6)
2025-06-02 19:55:06.049180 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7)
2025-06-02 19:55:06.049672 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda)
2025-06-02 19:55:06.050683 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb)
2025-06-02 19:55:06.051134 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc)
2025-06-02 19:55:06.051717 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd)
2025-06-02 19:55:06.052093 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0)
2025-06-02 19:55:06.052451 | orchestrator |
2025-06-02 19:55:06.052793 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-02 19:55:06.053180 | orchestrator | Monday 02 June 2025 19:55:06 +0000 (0:00:00.370) 0:00:46.345 ***********
2025-06-02 19:55:06.234691 | orchestrator | skipping: [testbed-node-5]
2025-06-02 19:55:06.234869 | orchestrator |
2025-06-02 19:55:06.235481 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-02 19:55:06.235595 | orchestrator | Monday 02 June 2025 19:55:06 +0000 (0:00:00.191) 0:00:46.537 ***********
2025-06-02 19:55:06.401145 | orchestrator | skipping: [testbed-node-5]
2025-06-02 19:55:06.401626 | orchestrator |
2025-06-02 19:55:06.403419 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-02 19:55:06.403990 | orchestrator | Monday 02 June 2025 19:55:06 +0000 (0:00:00.166) 0:00:46.703 ***********
2025-06-02 19:55:06.579098 | orchestrator | skipping: [testbed-node-5]
2025-06-02 19:55:06.579409 | orchestrator |
2025-06-02 19:55:06.580366 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-02 19:55:06.580987 | orchestrator | Monday 02 June 2025 19:55:06 +0000 (0:00:00.177) 0:00:46.880 ***********
2025-06-02 19:55:06.758165 | orchestrator | skipping: [testbed-node-5]
2025-06-02 19:55:06.758725 | orchestrator |
2025-06-02 19:55:06.758927 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-02 19:55:06.760003 | orchestrator | Monday 02 June 2025 19:55:06 +0000 (0:00:00.178) 0:00:47.059 ***********
2025-06-02 19:55:06.935342 | orchestrator | skipping: [testbed-node-5]
2025-06-02 19:55:06.936119 | orchestrator |
2025-06-02 19:55:06.937718 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-02 19:55:06.938257 | orchestrator | Monday 02 June 2025 19:55:06 +0000 (0:00:00.177) 0:00:47.237 ***********
2025-06-02 19:55:07.355113 | orchestrator | skipping: [testbed-node-5]
2025-06-02 19:55:07.355882 | orchestrator |
2025-06-02 19:55:07.357008 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-02 19:55:07.357982 | orchestrator | Monday 02 June 2025 19:55:07 +0000 (0:00:00.418) 0:00:47.656 ***********
2025-06-02 19:55:07.529327 | orchestrator | skipping: [testbed-node-5]
2025-06-02 19:55:07.530247 | orchestrator |
2025-06-02 19:55:07.531279 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-02 19:55:07.532184 | orchestrator | Monday 02 June 2025 19:55:07 +0000 (0:00:00.175) 0:00:47.831 ***********
2025-06-02 19:55:07.698267 | orchestrator | skipping: [testbed-node-5]
2025-06-02 19:55:07.698540 | orchestrator |
2025-06-02 19:55:07.699455 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-02 19:55:07.700267 | orchestrator | Monday 02 June 2025 19:55:07 +0000 (0:00:00.167) 0:00:47.998 ***********
2025-06-02 19:55:08.048901 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_08b58ced-3c5b-405c-ae09-2d18558cfc25)
2025-06-02 19:55:08.049366 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_08b58ced-3c5b-405c-ae09-2d18558cfc25)
2025-06-02 19:55:08.050407 | orchestrator |
2025-06-02 19:55:08.051198 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-02 19:55:08.051558 | orchestrator | Monday 02 June 2025 19:55:08 +0000 (0:00:00.352) 0:00:48.351 ***********
2025-06-02 19:55:08.397151 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_fe4e5841-a5c6-4e3c-a2f3-c1ddedcfef3b)
2025-06-02 19:55:08.397260 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_fe4e5841-a5c6-4e3c-a2f3-c1ddedcfef3b)
2025-06-02 19:55:08.397625 | orchestrator |
2025-06-02 19:55:08.397925 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-02 19:55:08.398756 | orchestrator | Monday 02 June 2025 19:55:08 +0000 (0:00:00.346) 0:00:48.697 ***********
2025-06-02 19:55:08.775994 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_17194968-3402-4871-a3b7-d8b4dd3032d8)
2025-06-02 19:55:08.776111 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_17194968-3402-4871-a3b7-d8b4dd3032d8)
2025-06-02 19:55:08.776190 | orchestrator |
2025-06-02 19:55:08.776663 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-02 19:55:08.776908 | orchestrator | Monday 02 June 2025 19:55:08 +0000 (0:00:00.380) 0:00:49.078 ***********
2025-06-02 19:55:09.165245 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_3a83bf91-153f-49f3-b384-9ce8856c05fb)
2025-06-02 19:55:09.165349 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_3a83bf91-153f-49f3-b384-9ce8856c05fb)
2025-06-02 19:55:09.165402 | orchestrator |
2025-06-02 19:55:09.165538 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-02 19:55:09.165836 | orchestrator | Monday 02 June 2025 19:55:09 +0000 (0:00:00.389) 0:00:49.467 ***********
2025-06-02 19:55:09.476772 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001)
2025-06-02 19:55:09.477634 | orchestrator |
2025-06-02 19:55:09.478143 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-02 19:55:09.479179 | orchestrator | Monday 02 June 2025 19:55:09 +0000 (0:00:00.310) 0:00:49.778 ***********
2025-06-02 19:55:09.892576 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0)
2025-06-02 19:55:09.893360 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1)
2025-06-02 19:55:09.895655 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2)
2025-06-02 19:55:09.896800 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3)
2025-06-02 19:55:09.897282 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4)
2025-06-02 19:55:09.898188 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5)
2025-06-02 19:55:09.899108 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6)
2025-06-02 19:55:09.900502 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7)
2025-06-02 19:55:09.901147 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda)
2025-06-02 19:55:09.901684 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb)
2025-06-02 19:55:09.903234 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc)
2025-06-02 19:55:09.904183 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd)
2025-06-02 19:55:09.904665 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0)
2025-06-02 19:55:09.906100 | orchestrator |
2025-06-02 19:55:09.906386 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-02 19:55:09.907164 | orchestrator | Monday 02 June 2025 19:55:09 +0000 (0:00:00.414) 0:00:50.192 ***********
2025-06-02 19:55:10.080413 | orchestrator | skipping: [testbed-node-5]
2025-06-02 19:55:10.081568 | orchestrator |
2025-06-02 19:55:10.081673 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-02 19:55:10.083160 | orchestrator | Monday 02 June 2025 19:55:10 +0000 (0:00:00.188) 0:00:50.381 ***********
2025-06-02 19:55:10.279918 | orchestrator | skipping: [testbed-node-5]
2025-06-02 19:55:10.280702 | orchestrator |
2025-06-02 19:55:10.281953 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-02 19:55:10.282235 | orchestrator | Monday 02 June 2025 19:55:10 +0000 (0:00:00.200) 0:00:50.581 ***********
2025-06-02 19:55:10.875395 | orchestrator | skipping: [testbed-node-5]
2025-06-02 19:55:10.877041 | orchestrator |
2025-06-02 19:55:10.877737 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-02 19:55:10.879012 | orchestrator | Monday 02 June 2025 19:55:10 +0000 (0:00:00.594) 0:00:51.175 ***********
2025-06-02 19:55:11.079066 | orchestrator | skipping: [testbed-node-5]
2025-06-02 19:55:11.080151 | orchestrator |
2025-06-02 19:55:11.081646 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-02 19:55:11.082401 | orchestrator | Monday 02 June 2025 19:55:11 +0000 (0:00:00.205) 0:00:51.381 ***********
2025-06-02 19:55:11.292198 | orchestrator | skipping: [testbed-node-5]
2025-06-02 19:55:11.292301 | orchestrator |
2025-06-02 19:55:11.292867 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-02 19:55:11.293655 | orchestrator | Monday 02 June 2025 19:55:11 +0000 (0:00:00.211) 0:00:51.593 ***********
2025-06-02 19:55:11.489023 | orchestrator | skipping: [testbed-node-5]
2025-06-02 19:55:11.489911 | orchestrator |
2025-06-02 19:55:11.490577 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-02 19:55:11.491575 | orchestrator | Monday 02 June 2025 19:55:11 +0000 (0:00:00.198) 0:00:51.791 ***********
2025-06-02 19:55:11.683852 | orchestrator | skipping: [testbed-node-5]
2025-06-02 19:55:11.684036 | orchestrator |
2025-06-02 19:55:11.685305 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-02 19:55:11.686288 | orchestrator | Monday 02 June 2025 19:55:11 +0000 (0:00:00.193) 0:00:51.984 ***********
2025-06-02 19:55:11.910906 | orchestrator | skipping: [testbed-node-5]
2025-06-02 19:55:11.911019 | orchestrator |
2025-06-02 19:55:11.911792 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-02 19:55:11.912764 | orchestrator | Monday 02 June 2025 19:55:11 +0000 (0:00:00.225) 0:00:52.210 ***********
2025-06-02 19:55:12.619979 | orchestrator | ok: [testbed-node-5] => (item=sda1)
2025-06-02 19:55:12.621658 | orchestrator | ok: [testbed-node-5] => (item=sda14)
2025-06-02 19:55:12.622684 | orchestrator | ok: [testbed-node-5] => (item=sda15)
2025-06-02 19:55:12.623358 | orchestrator | ok: [testbed-node-5] => (item=sda16)
2025-06-02 19:55:12.623949 | orchestrator |
2025-06-02 19:55:12.625235 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-02 19:55:12.625299 | orchestrator | Monday 02 June 2025 19:55:12 +0000 (0:00:00.711) 0:00:52.922 ***********
2025-06-02 19:55:12.821271 | orchestrator | skipping: [testbed-node-5]
2025-06-02 19:55:12.821911 | orchestrator |
2025-06-02 19:55:12.822590 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-02 19:55:12.823547 | orchestrator | Monday 02 June 2025 19:55:12 +0000 (0:00:00.199) 0:00:53.121 ***********
2025-06-02 19:55:13.031022 | orchestrator | skipping: [testbed-node-5]
2025-06-02 19:55:13.031311 | orchestrator |
2025-06-02 19:55:13.031962 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-02 19:55:13.032500 | orchestrator | Monday 02 June 2025 19:55:13 +0000 (0:00:00.208) 0:00:53.330 ***********
2025-06-02 19:55:13.220009 | orchestrator | skipping: [testbed-node-5]
2025-06-02 19:55:13.220413 | orchestrator |
2025-06-02 19:55:13.222090 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-02 19:55:13.223371 | orchestrator | Monday 02 June 2025 19:55:13 +0000 (0:00:00.191) 0:00:53.521 ***********
2025-06-02 19:55:13.414111 | orchestrator | skipping: [testbed-node-5]
2025-06-02 19:55:13.414492 | orchestrator |
2025-06-02 19:55:13.414600 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] *******************
2025-06-02 19:55:13.415512 | orchestrator | Monday 02 June 2025 19:55:13 +0000 (0:00:00.193) 0:00:53.715 ***********
2025-06-02 19:55:13.777986 | orchestrator | skipping: [testbed-node-5]
2025-06-02 19:55:13.778308 | orchestrator |
2025-06-02 19:55:13.779009 | orchestrator | TASK [Create dict of block VGs -> PVs from
ceph_osd_devices] ******************* 2025-06-02 19:55:13.779698 | orchestrator | Monday 02 June 2025 19:55:13 +0000 (0:00:00.364) 0:00:54.079 *********** 2025-06-02 19:55:13.976719 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '86208513-8fbd-535b-80fd-915c228be133'}}) 2025-06-02 19:55:13.977177 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'ed769c7c-5756-52eb-9583-a607cefce370'}}) 2025-06-02 19:55:13.978102 | orchestrator | 2025-06-02 19:55:13.978655 | orchestrator | TASK [Create block VGs] ******************************************************** 2025-06-02 19:55:13.979401 | orchestrator | Monday 02 June 2025 19:55:13 +0000 (0:00:00.197) 0:00:54.276 *********** 2025-06-02 19:55:15.791785 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-86208513-8fbd-535b-80fd-915c228be133', 'data_vg': 'ceph-86208513-8fbd-535b-80fd-915c228be133'}) 2025-06-02 19:55:15.793342 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-ed769c7c-5756-52eb-9583-a607cefce370', 'data_vg': 'ceph-ed769c7c-5756-52eb-9583-a607cefce370'}) 2025-06-02 19:55:15.793978 | orchestrator | 2025-06-02 19:55:15.795184 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2025-06-02 19:55:15.795734 | orchestrator | Monday 02 June 2025 19:55:15 +0000 (0:00:01.813) 0:00:56.090 *********** 2025-06-02 19:55:15.956556 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-86208513-8fbd-535b-80fd-915c228be133', 'data_vg': 'ceph-86208513-8fbd-535b-80fd-915c228be133'})  2025-06-02 19:55:15.956740 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ed769c7c-5756-52eb-9583-a607cefce370', 'data_vg': 'ceph-ed769c7c-5756-52eb-9583-a607cefce370'})  2025-06-02 19:55:15.957898 | orchestrator | skipping: [testbed-node-5] 2025-06-02 19:55:15.958489 | orchestrator | 2025-06-02 19:55:15.960025 | orchestrator | TASK [Create 
block LVs] ******************************************************** 2025-06-02 19:55:15.960054 | orchestrator | Monday 02 June 2025 19:55:15 +0000 (0:00:00.166) 0:00:56.257 *********** 2025-06-02 19:55:17.286115 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-86208513-8fbd-535b-80fd-915c228be133', 'data_vg': 'ceph-86208513-8fbd-535b-80fd-915c228be133'}) 2025-06-02 19:55:17.286205 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-ed769c7c-5756-52eb-9583-a607cefce370', 'data_vg': 'ceph-ed769c7c-5756-52eb-9583-a607cefce370'}) 2025-06-02 19:55:17.286218 | orchestrator | 2025-06-02 19:55:17.286852 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2025-06-02 19:55:17.287423 | orchestrator | Monday 02 June 2025 19:55:17 +0000 (0:00:01.328) 0:00:57.585 *********** 2025-06-02 19:55:17.435032 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-86208513-8fbd-535b-80fd-915c228be133', 'data_vg': 'ceph-86208513-8fbd-535b-80fd-915c228be133'})  2025-06-02 19:55:17.435271 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ed769c7c-5756-52eb-9583-a607cefce370', 'data_vg': 'ceph-ed769c7c-5756-52eb-9583-a607cefce370'})  2025-06-02 19:55:17.436189 | orchestrator | skipping: [testbed-node-5] 2025-06-02 19:55:17.437069 | orchestrator | 2025-06-02 19:55:17.437941 | orchestrator | TASK [Create DB VGs] *********************************************************** 2025-06-02 19:55:17.438851 | orchestrator | Monday 02 June 2025 19:55:17 +0000 (0:00:00.150) 0:00:57.736 *********** 2025-06-02 19:55:17.575288 | orchestrator | skipping: [testbed-node-5] 2025-06-02 19:55:17.576052 | orchestrator | 2025-06-02 19:55:17.576834 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2025-06-02 19:55:17.577849 | orchestrator | Monday 02 June 2025 19:55:17 +0000 (0:00:00.139) 0:00:57.876 *********** 2025-06-02 19:55:17.730266 | 
orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-86208513-8fbd-535b-80fd-915c228be133', 'data_vg': 'ceph-86208513-8fbd-535b-80fd-915c228be133'})  2025-06-02 19:55:17.731520 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ed769c7c-5756-52eb-9583-a607cefce370', 'data_vg': 'ceph-ed769c7c-5756-52eb-9583-a607cefce370'})  2025-06-02 19:55:17.732573 | orchestrator | skipping: [testbed-node-5] 2025-06-02 19:55:17.733282 | orchestrator | 2025-06-02 19:55:17.734307 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2025-06-02 19:55:17.735214 | orchestrator | Monday 02 June 2025 19:55:17 +0000 (0:00:00.153) 0:00:58.029 *********** 2025-06-02 19:55:17.862345 | orchestrator | skipping: [testbed-node-5] 2025-06-02 19:55:17.863119 | orchestrator | 2025-06-02 19:55:17.863944 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2025-06-02 19:55:17.865312 | orchestrator | Monday 02 June 2025 19:55:17 +0000 (0:00:00.132) 0:00:58.162 *********** 2025-06-02 19:55:18.009151 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-86208513-8fbd-535b-80fd-915c228be133', 'data_vg': 'ceph-86208513-8fbd-535b-80fd-915c228be133'})  2025-06-02 19:55:18.010311 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ed769c7c-5756-52eb-9583-a607cefce370', 'data_vg': 'ceph-ed769c7c-5756-52eb-9583-a607cefce370'})  2025-06-02 19:55:18.011261 | orchestrator | skipping: [testbed-node-5] 2025-06-02 19:55:18.012632 | orchestrator | 2025-06-02 19:55:18.013240 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2025-06-02 19:55:18.014072 | orchestrator | Monday 02 June 2025 19:55:17 +0000 (0:00:00.146) 0:00:58.309 *********** 2025-06-02 19:55:18.144516 | orchestrator | skipping: [testbed-node-5] 2025-06-02 19:55:18.144967 | orchestrator | 2025-06-02 19:55:18.145628 | orchestrator | TASK 
[Print 'Create DB+WAL VGs'] *********************************************** 2025-06-02 19:55:18.146409 | orchestrator | Monday 02 June 2025 19:55:18 +0000 (0:00:00.136) 0:00:58.445 *********** 2025-06-02 19:55:18.300417 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-86208513-8fbd-535b-80fd-915c228be133', 'data_vg': 'ceph-86208513-8fbd-535b-80fd-915c228be133'})  2025-06-02 19:55:18.300914 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ed769c7c-5756-52eb-9583-a607cefce370', 'data_vg': 'ceph-ed769c7c-5756-52eb-9583-a607cefce370'})  2025-06-02 19:55:18.301610 | orchestrator | skipping: [testbed-node-5] 2025-06-02 19:55:18.302215 | orchestrator | 2025-06-02 19:55:18.302618 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2025-06-02 19:55:18.302909 | orchestrator | Monday 02 June 2025 19:55:18 +0000 (0:00:00.155) 0:00:58.601 *********** 2025-06-02 19:55:18.444232 | orchestrator | ok: [testbed-node-5] 2025-06-02 19:55:18.445063 | orchestrator | 2025-06-02 19:55:18.445929 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2025-06-02 19:55:18.447004 | orchestrator | Monday 02 June 2025 19:55:18 +0000 (0:00:00.144) 0:00:58.745 *********** 2025-06-02 19:55:18.815087 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-86208513-8fbd-535b-80fd-915c228be133', 'data_vg': 'ceph-86208513-8fbd-535b-80fd-915c228be133'})  2025-06-02 19:55:18.815584 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ed769c7c-5756-52eb-9583-a607cefce370', 'data_vg': 'ceph-ed769c7c-5756-52eb-9583-a607cefce370'})  2025-06-02 19:55:18.816495 | orchestrator | skipping: [testbed-node-5] 2025-06-02 19:55:18.817798 | orchestrator | 2025-06-02 19:55:18.819660 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] *************** 2025-06-02 19:55:18.820404 | orchestrator | Monday 02 June 2025 
19:55:18 +0000 (0:00:00.370) 0:00:59.116 *********** 2025-06-02 19:55:18.978885 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-86208513-8fbd-535b-80fd-915c228be133', 'data_vg': 'ceph-86208513-8fbd-535b-80fd-915c228be133'})  2025-06-02 19:55:18.978993 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ed769c7c-5756-52eb-9583-a607cefce370', 'data_vg': 'ceph-ed769c7c-5756-52eb-9583-a607cefce370'})  2025-06-02 19:55:18.979077 | orchestrator | skipping: [testbed-node-5] 2025-06-02 19:55:18.980256 | orchestrator | 2025-06-02 19:55:18.981404 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2025-06-02 19:55:18.982653 | orchestrator | Monday 02 June 2025 19:55:18 +0000 (0:00:00.160) 0:00:59.277 *********** 2025-06-02 19:55:19.127044 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-86208513-8fbd-535b-80fd-915c228be133', 'data_vg': 'ceph-86208513-8fbd-535b-80fd-915c228be133'})  2025-06-02 19:55:19.127147 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ed769c7c-5756-52eb-9583-a607cefce370', 'data_vg': 'ceph-ed769c7c-5756-52eb-9583-a607cefce370'})  2025-06-02 19:55:19.127276 | orchestrator | skipping: [testbed-node-5] 2025-06-02 19:55:19.127696 | orchestrator | 2025-06-02 19:55:19.128615 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2025-06-02 19:55:19.128982 | orchestrator | Monday 02 June 2025 19:55:19 +0000 (0:00:00.150) 0:00:59.427 *********** 2025-06-02 19:55:19.265555 | orchestrator | skipping: [testbed-node-5] 2025-06-02 19:55:19.265939 | orchestrator | 2025-06-02 19:55:19.267737 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2025-06-02 19:55:19.268633 | orchestrator | Monday 02 June 2025 19:55:19 +0000 (0:00:00.139) 0:00:59.567 *********** 2025-06-02 19:55:19.410777 | orchestrator | skipping: [testbed-node-5] 2025-06-02 
19:55:19.411373 | orchestrator | 2025-06-02 19:55:19.412779 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2025-06-02 19:55:19.413539 | orchestrator | Monday 02 June 2025 19:55:19 +0000 (0:00:00.143) 0:00:59.711 *********** 2025-06-02 19:55:19.540488 | orchestrator | skipping: [testbed-node-5] 2025-06-02 19:55:19.540872 | orchestrator | 2025-06-02 19:55:19.541748 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2025-06-02 19:55:19.542514 | orchestrator | Monday 02 June 2025 19:55:19 +0000 (0:00:00.130) 0:00:59.842 *********** 2025-06-02 19:55:19.691899 | orchestrator | ok: [testbed-node-5] => { 2025-06-02 19:55:19.692631 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2025-06-02 19:55:19.692942 | orchestrator | } 2025-06-02 19:55:19.694135 | orchestrator | 2025-06-02 19:55:19.695597 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2025-06-02 19:55:19.696077 | orchestrator | Monday 02 June 2025 19:55:19 +0000 (0:00:00.151) 0:00:59.993 *********** 2025-06-02 19:55:19.828792 | orchestrator | ok: [testbed-node-5] => { 2025-06-02 19:55:19.829576 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2025-06-02 19:55:19.830457 | orchestrator | } 2025-06-02 19:55:19.831167 | orchestrator | 2025-06-02 19:55:19.832601 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2025-06-02 19:55:19.832699 | orchestrator | Monday 02 June 2025 19:55:19 +0000 (0:00:00.134) 0:01:00.127 *********** 2025-06-02 19:55:19.966847 | orchestrator | ok: [testbed-node-5] => { 2025-06-02 19:55:19.967523 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2025-06-02 19:55:19.968644 | orchestrator | } 2025-06-02 19:55:19.969679 | orchestrator | 2025-06-02 19:55:19.970422 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ******************** 2025-06-02 19:55:19.970949 | 
orchestrator | Monday 02 June 2025 19:55:19 +0000 (0:00:00.139) 0:01:00.267 *********** 2025-06-02 19:55:20.488374 | orchestrator | ok: [testbed-node-5] 2025-06-02 19:55:20.491102 | orchestrator | 2025-06-02 19:55:20.491497 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2025-06-02 19:55:20.491532 | orchestrator | Monday 02 June 2025 19:55:20 +0000 (0:00:00.522) 0:01:00.790 *********** 2025-06-02 19:55:21.008857 | orchestrator | ok: [testbed-node-5] 2025-06-02 19:55:21.008962 | orchestrator | 2025-06-02 19:55:21.008978 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2025-06-02 19:55:21.008991 | orchestrator | Monday 02 June 2025 19:55:20 +0000 (0:00:00.514) 0:01:01.304 *********** 2025-06-02 19:55:21.553693 | orchestrator | ok: [testbed-node-5] 2025-06-02 19:55:21.554417 | orchestrator | 2025-06-02 19:55:21.555131 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2025-06-02 19:55:21.555173 | orchestrator | Monday 02 June 2025 19:55:21 +0000 (0:00:00.551) 0:01:01.855 *********** 2025-06-02 19:55:21.892802 | orchestrator | ok: [testbed-node-5] 2025-06-02 19:55:21.892973 | orchestrator | 2025-06-02 19:55:21.894250 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2025-06-02 19:55:21.895106 | orchestrator | Monday 02 June 2025 19:55:21 +0000 (0:00:00.337) 0:01:02.193 *********** 2025-06-02 19:55:22.009533 | orchestrator | skipping: [testbed-node-5] 2025-06-02 19:55:22.009756 | orchestrator | 2025-06-02 19:55:22.011295 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2025-06-02 19:55:22.012661 | orchestrator | Monday 02 June 2025 19:55:22 +0000 (0:00:00.115) 0:01:02.309 *********** 2025-06-02 19:55:22.116158 | orchestrator | skipping: [testbed-node-5] 2025-06-02 19:55:22.116873 | orchestrator | 2025-06-02 19:55:22.117984 | 
orchestrator | TASK [Print LVM VGs report data] *********************************************** 2025-06-02 19:55:22.119093 | orchestrator | Monday 02 June 2025 19:55:22 +0000 (0:00:00.108) 0:01:02.417 *********** 2025-06-02 19:55:22.260219 | orchestrator | ok: [testbed-node-5] => { 2025-06-02 19:55:22.261525 | orchestrator |  "vgs_report": { 2025-06-02 19:55:22.263251 | orchestrator |  "vg": [] 2025-06-02 19:55:22.264188 | orchestrator |  } 2025-06-02 19:55:22.265242 | orchestrator | } 2025-06-02 19:55:22.266561 | orchestrator | 2025-06-02 19:55:22.267736 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2025-06-02 19:55:22.268368 | orchestrator | Monday 02 June 2025 19:55:22 +0000 (0:00:00.143) 0:01:02.561 *********** 2025-06-02 19:55:22.383288 | orchestrator | skipping: [testbed-node-5] 2025-06-02 19:55:22.383678 | orchestrator | 2025-06-02 19:55:22.384499 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2025-06-02 19:55:22.384835 | orchestrator | Monday 02 June 2025 19:55:22 +0000 (0:00:00.123) 0:01:02.685 *********** 2025-06-02 19:55:22.562567 | orchestrator | skipping: [testbed-node-5] 2025-06-02 19:55:22.562804 | orchestrator | 2025-06-02 19:55:22.562899 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2025-06-02 19:55:22.563721 | orchestrator | Monday 02 June 2025 19:55:22 +0000 (0:00:00.178) 0:01:02.864 *********** 2025-06-02 19:55:22.696259 | orchestrator | skipping: [testbed-node-5] 2025-06-02 19:55:22.696521 | orchestrator | 2025-06-02 19:55:22.696788 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2025-06-02 19:55:22.697788 | orchestrator | Monday 02 June 2025 19:55:22 +0000 (0:00:00.134) 0:01:02.998 *********** 2025-06-02 19:55:22.821249 | orchestrator | skipping: [testbed-node-5] 2025-06-02 19:55:22.821898 | orchestrator | 2025-06-02 19:55:22.822793 | 
orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2025-06-02 19:55:22.823844 | orchestrator | Monday 02 June 2025 19:55:22 +0000 (0:00:00.124) 0:01:03.122 *********** 2025-06-02 19:55:22.955886 | orchestrator | skipping: [testbed-node-5] 2025-06-02 19:55:22.956798 | orchestrator | 2025-06-02 19:55:22.957474 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2025-06-02 19:55:22.958711 | orchestrator | Monday 02 June 2025 19:55:22 +0000 (0:00:00.134) 0:01:03.257 *********** 2025-06-02 19:55:23.078411 | orchestrator | skipping: [testbed-node-5] 2025-06-02 19:55:23.078663 | orchestrator | 2025-06-02 19:55:23.079233 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2025-06-02 19:55:23.080637 | orchestrator | Monday 02 June 2025 19:55:23 +0000 (0:00:00.120) 0:01:03.378 *********** 2025-06-02 19:55:23.212188 | orchestrator | skipping: [testbed-node-5] 2025-06-02 19:55:23.212634 | orchestrator | 2025-06-02 19:55:23.213353 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2025-06-02 19:55:23.214005 | orchestrator | Monday 02 June 2025 19:55:23 +0000 (0:00:00.136) 0:01:03.514 *********** 2025-06-02 19:55:23.363959 | orchestrator | skipping: [testbed-node-5] 2025-06-02 19:55:23.364834 | orchestrator | 2025-06-02 19:55:23.365493 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2025-06-02 19:55:23.366457 | orchestrator | Monday 02 June 2025 19:55:23 +0000 (0:00:00.151) 0:01:03.666 *********** 2025-06-02 19:55:23.714263 | orchestrator | skipping: [testbed-node-5] 2025-06-02 19:55:23.714982 | orchestrator | 2025-06-02 19:55:23.716468 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2025-06-02 19:55:23.717713 | orchestrator | Monday 02 June 2025 19:55:23 +0000 (0:00:00.349) 0:01:04.015 *********** 
2025-06-02 19:55:23.858755 | orchestrator | skipping: [testbed-node-5] 2025-06-02 19:55:23.859547 | orchestrator | 2025-06-02 19:55:23.860288 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2025-06-02 19:55:23.861357 | orchestrator | Monday 02 June 2025 19:55:23 +0000 (0:00:00.144) 0:01:04.160 *********** 2025-06-02 19:55:23.997787 | orchestrator | skipping: [testbed-node-5] 2025-06-02 19:55:23.998066 | orchestrator | 2025-06-02 19:55:23.998559 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2025-06-02 19:55:23.998793 | orchestrator | Monday 02 June 2025 19:55:23 +0000 (0:00:00.139) 0:01:04.299 *********** 2025-06-02 19:55:24.144581 | orchestrator | skipping: [testbed-node-5] 2025-06-02 19:55:24.144693 | orchestrator | 2025-06-02 19:55:24.145269 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2025-06-02 19:55:24.146237 | orchestrator | Monday 02 June 2025 19:55:24 +0000 (0:00:00.145) 0:01:04.444 *********** 2025-06-02 19:55:24.286913 | orchestrator | skipping: [testbed-node-5] 2025-06-02 19:55:24.287585 | orchestrator | 2025-06-02 19:55:24.288625 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2025-06-02 19:55:24.289397 | orchestrator | Monday 02 June 2025 19:55:24 +0000 (0:00:00.143) 0:01:04.587 *********** 2025-06-02 19:55:24.443360 | orchestrator | skipping: [testbed-node-5] 2025-06-02 19:55:24.443624 | orchestrator | 2025-06-02 19:55:24.445256 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2025-06-02 19:55:24.445388 | orchestrator | Monday 02 June 2025 19:55:24 +0000 (0:00:00.157) 0:01:04.744 *********** 2025-06-02 19:55:24.596844 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-86208513-8fbd-535b-80fd-915c228be133', 'data_vg': 'ceph-86208513-8fbd-535b-80fd-915c228be133'})  2025-06-02 
19:55:24.598510 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ed769c7c-5756-52eb-9583-a607cefce370', 'data_vg': 'ceph-ed769c7c-5756-52eb-9583-a607cefce370'})  2025-06-02 19:55:24.599721 | orchestrator | skipping: [testbed-node-5] 2025-06-02 19:55:24.600979 | orchestrator | 2025-06-02 19:55:24.601622 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2025-06-02 19:55:24.602670 | orchestrator | Monday 02 June 2025 19:55:24 +0000 (0:00:00.153) 0:01:04.898 *********** 2025-06-02 19:55:24.751741 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-86208513-8fbd-535b-80fd-915c228be133', 'data_vg': 'ceph-86208513-8fbd-535b-80fd-915c228be133'})  2025-06-02 19:55:24.752303 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ed769c7c-5756-52eb-9583-a607cefce370', 'data_vg': 'ceph-ed769c7c-5756-52eb-9583-a607cefce370'})  2025-06-02 19:55:24.753283 | orchestrator | skipping: [testbed-node-5] 2025-06-02 19:55:24.753998 | orchestrator | 2025-06-02 19:55:24.754852 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2025-06-02 19:55:24.756164 | orchestrator | Monday 02 June 2025 19:55:24 +0000 (0:00:00.155) 0:01:05.053 *********** 2025-06-02 19:55:24.925636 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-86208513-8fbd-535b-80fd-915c228be133', 'data_vg': 'ceph-86208513-8fbd-535b-80fd-915c228be133'})  2025-06-02 19:55:24.926619 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ed769c7c-5756-52eb-9583-a607cefce370', 'data_vg': 'ceph-ed769c7c-5756-52eb-9583-a607cefce370'})  2025-06-02 19:55:24.927324 | orchestrator | skipping: [testbed-node-5] 2025-06-02 19:55:24.928227 | orchestrator | 2025-06-02 19:55:24.930384 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2025-06-02 19:55:24.930420 | orchestrator | Monday 02 June 2025 
19:55:24 +0000 (0:00:00.171) 0:01:05.225 *********** 2025-06-02 19:55:25.082291 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-86208513-8fbd-535b-80fd-915c228be133', 'data_vg': 'ceph-86208513-8fbd-535b-80fd-915c228be133'})  2025-06-02 19:55:25.082944 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ed769c7c-5756-52eb-9583-a607cefce370', 'data_vg': 'ceph-ed769c7c-5756-52eb-9583-a607cefce370'})  2025-06-02 19:55:25.083616 | orchestrator | skipping: [testbed-node-5] 2025-06-02 19:55:25.084183 | orchestrator | 2025-06-02 19:55:25.085782 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2025-06-02 19:55:25.085806 | orchestrator | Monday 02 June 2025 19:55:25 +0000 (0:00:00.157) 0:01:05.382 *********** 2025-06-02 19:55:25.241478 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-86208513-8fbd-535b-80fd-915c228be133', 'data_vg': 'ceph-86208513-8fbd-535b-80fd-915c228be133'})  2025-06-02 19:55:25.241616 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ed769c7c-5756-52eb-9583-a607cefce370', 'data_vg': 'ceph-ed769c7c-5756-52eb-9583-a607cefce370'})  2025-06-02 19:55:25.242709 | orchestrator | skipping: [testbed-node-5] 2025-06-02 19:55:25.242751 | orchestrator | 2025-06-02 19:55:25.243500 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2025-06-02 19:55:25.244931 | orchestrator | Monday 02 June 2025 19:55:25 +0000 (0:00:00.161) 0:01:05.543 *********** 2025-06-02 19:55:25.382361 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-86208513-8fbd-535b-80fd-915c228be133', 'data_vg': 'ceph-86208513-8fbd-535b-80fd-915c228be133'})  2025-06-02 19:55:25.382650 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ed769c7c-5756-52eb-9583-a607cefce370', 'data_vg': 'ceph-ed769c7c-5756-52eb-9583-a607cefce370'})  2025-06-02 19:55:25.382813 | orchestrator | 
skipping: [testbed-node-5] 2025-06-02 19:55:25.383385 | orchestrator | 2025-06-02 19:55:25.383731 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2025-06-02 19:55:25.384219 | orchestrator | Monday 02 June 2025 19:55:25 +0000 (0:00:00.140) 0:01:05.684 *********** 2025-06-02 19:55:25.765253 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-86208513-8fbd-535b-80fd-915c228be133', 'data_vg': 'ceph-86208513-8fbd-535b-80fd-915c228be133'})  2025-06-02 19:55:25.765497 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ed769c7c-5756-52eb-9583-a607cefce370', 'data_vg': 'ceph-ed769c7c-5756-52eb-9583-a607cefce370'})  2025-06-02 19:55:25.765770 | orchestrator | skipping: [testbed-node-5] 2025-06-02 19:55:25.765841 | orchestrator | 2025-06-02 19:55:25.766168 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2025-06-02 19:55:25.766640 | orchestrator | Monday 02 June 2025 19:55:25 +0000 (0:00:00.382) 0:01:06.067 *********** 2025-06-02 19:55:25.925424 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-86208513-8fbd-535b-80fd-915c228be133', 'data_vg': 'ceph-86208513-8fbd-535b-80fd-915c228be133'})  2025-06-02 19:55:25.926638 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ed769c7c-5756-52eb-9583-a607cefce370', 'data_vg': 'ceph-ed769c7c-5756-52eb-9583-a607cefce370'})  2025-06-02 19:55:25.927584 | orchestrator | skipping: [testbed-node-5] 2025-06-02 19:55:25.928786 | orchestrator | 2025-06-02 19:55:25.929700 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2025-06-02 19:55:25.930564 | orchestrator | Monday 02 June 2025 19:55:25 +0000 (0:00:00.158) 0:01:06.225 *********** 2025-06-02 19:55:26.441041 | orchestrator | ok: [testbed-node-5] 2025-06-02 19:55:26.441159 | orchestrator | 2025-06-02 19:55:26.441781 | orchestrator | TASK [Get list of Ceph PVs with 
associated VGs] ******************************** 2025-06-02 19:55:26.442395 | orchestrator | Monday 02 June 2025 19:55:26 +0000 (0:00:00.514) 0:01:06.740 *********** 2025-06-02 19:55:27.008860 | orchestrator | ok: [testbed-node-5] 2025-06-02 19:55:27.009029 | orchestrator | 2025-06-02 19:55:27.009817 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2025-06-02 19:55:27.011092 | orchestrator | Monday 02 June 2025 19:55:27 +0000 (0:00:00.568) 0:01:07.309 *********** 2025-06-02 19:55:27.176410 | orchestrator | ok: [testbed-node-5] 2025-06-02 19:55:27.177574 | orchestrator | 2025-06-02 19:55:27.178471 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2025-06-02 19:55:27.179868 | orchestrator | Monday 02 June 2025 19:55:27 +0000 (0:00:00.166) 0:01:07.476 *********** 2025-06-02 19:55:27.339874 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-86208513-8fbd-535b-80fd-915c228be133', 'vg_name': 'ceph-86208513-8fbd-535b-80fd-915c228be133'}) 2025-06-02 19:55:27.340084 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-ed769c7c-5756-52eb-9583-a607cefce370', 'vg_name': 'ceph-ed769c7c-5756-52eb-9583-a607cefce370'}) 2025-06-02 19:55:27.341421 | orchestrator | 2025-06-02 19:55:27.343248 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2025-06-02 19:55:27.343283 | orchestrator | Monday 02 June 2025 19:55:27 +0000 (0:00:00.164) 0:01:07.640 *********** 2025-06-02 19:55:27.518952 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-86208513-8fbd-535b-80fd-915c228be133', 'data_vg': 'ceph-86208513-8fbd-535b-80fd-915c228be133'})  2025-06-02 19:55:27.519138 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ed769c7c-5756-52eb-9583-a607cefce370', 'data_vg': 'ceph-ed769c7c-5756-52eb-9583-a607cefce370'})  2025-06-02 19:55:27.519464 | orchestrator | skipping: 
[testbed-node-5] 2025-06-02 19:55:27.520137 | orchestrator | 2025-06-02 19:55:27.521093 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2025-06-02 19:55:27.521921 | orchestrator | Monday 02 June 2025 19:55:27 +0000 (0:00:00.179) 0:01:07.820 *********** 2025-06-02 19:55:27.673028 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-86208513-8fbd-535b-80fd-915c228be133', 'data_vg': 'ceph-86208513-8fbd-535b-80fd-915c228be133'})  2025-06-02 19:55:27.673291 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ed769c7c-5756-52eb-9583-a607cefce370', 'data_vg': 'ceph-ed769c7c-5756-52eb-9583-a607cefce370'})  2025-06-02 19:55:27.673968 | orchestrator | skipping: [testbed-node-5] 2025-06-02 19:55:27.674760 | orchestrator | 2025-06-02 19:55:27.675253 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2025-06-02 19:55:27.676647 | orchestrator | Monday 02 June 2025 19:55:27 +0000 (0:00:00.153) 0:01:07.974 *********** 2025-06-02 19:55:27.827065 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-86208513-8fbd-535b-80fd-915c228be133', 'data_vg': 'ceph-86208513-8fbd-535b-80fd-915c228be133'})  2025-06-02 19:55:27.827681 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ed769c7c-5756-52eb-9583-a607cefce370', 'data_vg': 'ceph-ed769c7c-5756-52eb-9583-a607cefce370'})  2025-06-02 19:55:27.828535 | orchestrator | skipping: [testbed-node-5] 2025-06-02 19:55:27.829171 | orchestrator | 2025-06-02 19:55:27.829773 | orchestrator | TASK [Print LVM report data] *************************************************** 2025-06-02 19:55:27.830500 | orchestrator | Monday 02 June 2025 19:55:27 +0000 (0:00:00.154) 0:01:08.128 *********** 2025-06-02 19:55:27.967067 | orchestrator | ok: [testbed-node-5] => { 2025-06-02 19:55:27.968210 | orchestrator |  "lvm_report": { 2025-06-02 19:55:27.969176 | orchestrator |  "lv": [ 2025-06-02 
19:55:27.971169 | orchestrator |  { 2025-06-02 19:55:27.971409 | orchestrator |  "lv_name": "osd-block-86208513-8fbd-535b-80fd-915c228be133", 2025-06-02 19:55:27.972720 | orchestrator |  "vg_name": "ceph-86208513-8fbd-535b-80fd-915c228be133" 2025-06-02 19:55:27.973578 | orchestrator |  }, 2025-06-02 19:55:27.974249 | orchestrator |  { 2025-06-02 19:55:27.975049 | orchestrator |  "lv_name": "osd-block-ed769c7c-5756-52eb-9583-a607cefce370", 2025-06-02 19:55:27.976080 | orchestrator |  "vg_name": "ceph-ed769c7c-5756-52eb-9583-a607cefce370" 2025-06-02 19:55:27.976542 | orchestrator |  } 2025-06-02 19:55:27.977191 | orchestrator |  ], 2025-06-02 19:55:27.978936 | orchestrator |  "pv": [ 2025-06-02 19:55:27.979764 | orchestrator |  { 2025-06-02 19:55:27.980408 | orchestrator |  "pv_name": "/dev/sdb", 2025-06-02 19:55:27.981216 | orchestrator |  "vg_name": "ceph-86208513-8fbd-535b-80fd-915c228be133" 2025-06-02 19:55:27.982119 | orchestrator |  }, 2025-06-02 19:55:27.982393 | orchestrator |  { 2025-06-02 19:55:27.982947 | orchestrator |  "pv_name": "/dev/sdc", 2025-06-02 19:55:27.983777 | orchestrator |  "vg_name": "ceph-ed769c7c-5756-52eb-9583-a607cefce370" 2025-06-02 19:55:27.984394 | orchestrator |  } 2025-06-02 19:55:27.985229 | orchestrator |  ] 2025-06-02 19:55:27.985715 | orchestrator |  } 2025-06-02 19:55:27.986235 | orchestrator | } 2025-06-02 19:55:27.986659 | orchestrator | 2025-06-02 19:55:27.987232 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-02 19:55:27.987780 | orchestrator | 2025-06-02 19:55:27 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-06-02 19:55:27.987810 | orchestrator | 2025-06-02 19:55:27 | INFO  | Please wait and do not abort execution. 
2025-06-02 19:55:27.988461 | orchestrator | testbed-node-3 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2025-06-02 19:55:27.989250 | orchestrator | testbed-node-4 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2025-06-02 19:55:27.989669 | orchestrator | testbed-node-5 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2025-06-02 19:55:27.990344 | orchestrator | 2025-06-02 19:55:27.990709 | orchestrator | 2025-06-02 19:55:27.990958 | orchestrator | 2025-06-02 19:55:27.991615 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-02 19:55:27.991946 | orchestrator | Monday 02 June 2025 19:55:27 +0000 (0:00:00.139) 0:01:08.268 *********** 2025-06-02 19:55:27.992574 | orchestrator | =============================================================================== 2025-06-02 19:55:27.992964 | orchestrator | Create block VGs -------------------------------------------------------- 5.67s 2025-06-02 19:55:27.993407 | orchestrator | Create block LVs -------------------------------------------------------- 4.03s 2025-06-02 19:55:27.993927 | orchestrator | Gather DB VGs with total and available size in bytes -------------------- 1.81s 2025-06-02 19:55:27.994341 | orchestrator | Get list of Ceph PVs with associated VGs -------------------------------- 1.56s 2025-06-02 19:55:27.994711 | orchestrator | Gather DB+WAL VGs with total and available size in bytes ---------------- 1.55s 2025-06-02 19:55:27.995129 | orchestrator | Gather WAL VGs with total and available size in bytes ------------------- 1.55s 2025-06-02 19:55:27.995583 | orchestrator | Get list of Ceph LVs with associated VGs -------------------------------- 1.49s 2025-06-02 19:55:27.996510 | orchestrator | Add known partitions to the list of available block devices ------------- 1.40s 2025-06-02 19:55:27.996536 | orchestrator | Add known links to the list of available block devices 
------------------ 1.15s 2025-06-02 19:55:27.996819 | orchestrator | Add known partitions to the list of available block devices ------------- 1.04s 2025-06-02 19:55:27.997058 | orchestrator | Print LVM report data --------------------------------------------------- 0.81s 2025-06-02 19:55:27.997469 | orchestrator | Add known partitions to the list of available block devices ------------- 0.72s 2025-06-02 19:55:27.997836 | orchestrator | Add known partitions to the list of available block devices ------------- 0.71s 2025-06-02 19:55:27.998536 | orchestrator | Create DB LVs for ceph_db_wal_devices ----------------------------------- 0.69s 2025-06-02 19:55:27.999007 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.69s 2025-06-02 19:55:27.999518 | orchestrator | Print 'Create DB LVs for ceph_db_devices' ------------------------------- 0.67s 2025-06-02 19:55:27.999814 | orchestrator | Count OSDs put on ceph_db_devices defined in lvm_volumes ---------------- 0.66s 2025-06-02 19:55:28.000535 | orchestrator | Combine JSON from _db/wal/db_wal_vgs_cmd_output ------------------------- 0.63s 2025-06-02 19:55:28.000949 | orchestrator | Fail if DB LV defined in lvm_volumes is missing ------------------------- 0.63s 2025-06-02 19:55:28.001636 | orchestrator | Print 'Create DB VGs' --------------------------------------------------- 0.63s 2025-06-02 19:55:30.410393 | orchestrator | Registering Redlock._acquired_script 2025-06-02 19:55:30.410527 | orchestrator | Registering Redlock._extend_script 2025-06-02 19:55:30.410543 | orchestrator | Registering Redlock._release_script 2025-06-02 19:55:30.468228 | orchestrator | 2025-06-02 19:55:30 | INFO  | Task 6d1b31f7-7bb4-40b3-a033-2b0d5bf3ed17 (facts) was prepared for execution. 2025-06-02 19:55:30.468317 | orchestrator | 2025-06-02 19:55:30 | INFO  | It takes a moment until task 6d1b31f7-7bb4-40b3-a033-2b0d5bf3ed17 (facts) has been started and output is visible here. 
2025-06-02 19:55:34.657781 | orchestrator | 2025-06-02 19:55:34.659459 | orchestrator | PLAY [Apply role facts] ******************************************************** 2025-06-02 19:55:34.659532 | orchestrator | 2025-06-02 19:55:34.661732 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2025-06-02 19:55:34.662884 | orchestrator | Monday 02 June 2025 19:55:34 +0000 (0:00:00.278) 0:00:00.278 *********** 2025-06-02 19:55:35.711903 | orchestrator | ok: [testbed-manager] 2025-06-02 19:55:35.715202 | orchestrator | ok: [testbed-node-1] 2025-06-02 19:55:35.715280 | orchestrator | ok: [testbed-node-0] 2025-06-02 19:55:35.716512 | orchestrator | ok: [testbed-node-2] 2025-06-02 19:55:35.717021 | orchestrator | ok: [testbed-node-3] 2025-06-02 19:55:35.719050 | orchestrator | ok: [testbed-node-4] 2025-06-02 19:55:35.720141 | orchestrator | ok: [testbed-node-5] 2025-06-02 19:55:35.720578 | orchestrator | 2025-06-02 19:55:35.721526 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2025-06-02 19:55:35.722146 | orchestrator | Monday 02 June 2025 19:55:35 +0000 (0:00:01.052) 0:00:01.330 *********** 2025-06-02 19:55:35.875022 | orchestrator | skipping: [testbed-manager] 2025-06-02 19:55:35.955963 | orchestrator | skipping: [testbed-node-0] 2025-06-02 19:55:36.036666 | orchestrator | skipping: [testbed-node-1] 2025-06-02 19:55:36.116249 | orchestrator | skipping: [testbed-node-2] 2025-06-02 19:55:36.194865 | orchestrator | skipping: [testbed-node-3] 2025-06-02 19:55:36.908474 | orchestrator | skipping: [testbed-node-4] 2025-06-02 19:55:36.909151 | orchestrator | skipping: [testbed-node-5] 2025-06-02 19:55:36.913490 | orchestrator | 2025-06-02 19:55:36.913528 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-06-02 19:55:36.913542 | orchestrator | 2025-06-02 19:55:36.914509 | orchestrator | TASK [Gathers facts about hosts] 
*********************************************** 2025-06-02 19:55:36.915926 | orchestrator | Monday 02 June 2025 19:55:36 +0000 (0:00:01.200) 0:00:02.531 *********** 2025-06-02 19:55:41.779572 | orchestrator | ok: [testbed-node-2] 2025-06-02 19:55:41.780760 | orchestrator | ok: [testbed-node-1] 2025-06-02 19:55:41.784815 | orchestrator | ok: [testbed-node-0] 2025-06-02 19:55:41.784848 | orchestrator | ok: [testbed-manager] 2025-06-02 19:55:41.784860 | orchestrator | ok: [testbed-node-4] 2025-06-02 19:55:41.787573 | orchestrator | ok: [testbed-node-3] 2025-06-02 19:55:41.787653 | orchestrator | ok: [testbed-node-5] 2025-06-02 19:55:41.788454 | orchestrator | 2025-06-02 19:55:41.789610 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2025-06-02 19:55:41.790067 | orchestrator | 2025-06-02 19:55:41.791066 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2025-06-02 19:55:41.792232 | orchestrator | Monday 02 June 2025 19:55:41 +0000 (0:00:04.871) 0:00:07.403 *********** 2025-06-02 19:55:41.935129 | orchestrator | skipping: [testbed-manager] 2025-06-02 19:55:42.009632 | orchestrator | skipping: [testbed-node-0] 2025-06-02 19:55:42.083027 | orchestrator | skipping: [testbed-node-1] 2025-06-02 19:55:42.162150 | orchestrator | skipping: [testbed-node-2] 2025-06-02 19:55:42.238078 | orchestrator | skipping: [testbed-node-3] 2025-06-02 19:55:42.286485 | orchestrator | skipping: [testbed-node-4] 2025-06-02 19:55:42.286543 | orchestrator | skipping: [testbed-node-5] 2025-06-02 19:55:42.286556 | orchestrator | 2025-06-02 19:55:42.287493 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-02 19:55:42.287888 | orchestrator | 2025-06-02 19:55:42 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 
2025-06-02 19:55:42.288916 | orchestrator | 2025-06-02 19:55:42 | INFO  | Please wait and do not abort execution. 2025-06-02 19:55:42.289797 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-02 19:55:42.290560 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-02 19:55:42.291412 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-02 19:55:42.292263 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-02 19:55:42.293113 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-02 19:55:42.293663 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-02 19:55:42.294337 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-02 19:55:42.295046 | orchestrator | 2025-06-02 19:55:42.295635 | orchestrator | 2025-06-02 19:55:42.296207 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-02 19:55:42.297641 | orchestrator | Monday 02 June 2025 19:55:42 +0000 (0:00:00.505) 0:00:07.908 *********** 2025-06-02 19:55:42.298274 | orchestrator | =============================================================================== 2025-06-02 19:55:42.298955 | orchestrator | Gathers facts about hosts ----------------------------------------------- 4.87s 2025-06-02 19:55:42.299422 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.20s 2025-06-02 19:55:42.299828 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.05s 2025-06-02 19:55:42.300556 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.51s 2025-06-02 
19:55:42.898806 | orchestrator | 2025-06-02 19:55:42.901116 | orchestrator | --> DEPLOY IN A NUTSHELL -- START -- Mon Jun 2 19:55:42 UTC 2025 2025-06-02 19:55:42.901174 | orchestrator | 2025-06-02 19:55:44.572929 | orchestrator | 2025-06-02 19:55:44 | INFO  | Collection nutshell is prepared for execution 2025-06-02 19:55:44.573015 | orchestrator | 2025-06-02 19:55:44 | INFO  | D [0] - dotfiles 2025-06-02 19:55:44.577769 | orchestrator | Registering Redlock._acquired_script 2025-06-02 19:55:44.577850 | orchestrator | Registering Redlock._extend_script 2025-06-02 19:55:44.578098 | orchestrator | Registering Redlock._release_script 2025-06-02 19:55:44.582074 | orchestrator | 2025-06-02 19:55:44 | INFO  | D [0] - homer 2025-06-02 19:55:44.582100 | orchestrator | 2025-06-02 19:55:44 | INFO  | D [0] - netdata 2025-06-02 19:55:44.582146 | orchestrator | 2025-06-02 19:55:44 | INFO  | D [0] - openstackclient 2025-06-02 19:55:44.582716 | orchestrator | 2025-06-02 19:55:44 | INFO  | D [0] - phpmyadmin 2025-06-02 19:55:44.582759 | orchestrator | 2025-06-02 19:55:44 | INFO  | A [0] - common 2025-06-02 19:55:44.584519 | orchestrator | 2025-06-02 19:55:44 | INFO  | A [1] -- loadbalancer 2025-06-02 19:55:44.584741 | orchestrator | 2025-06-02 19:55:44 | INFO  | D [2] --- opensearch 2025-06-02 19:55:44.584758 | orchestrator | 2025-06-02 19:55:44 | INFO  | A [2] --- mariadb-ng 2025-06-02 19:55:44.584923 | orchestrator | 2025-06-02 19:55:44 | INFO  | D [3] ---- horizon 2025-06-02 19:55:44.584941 | orchestrator | 2025-06-02 19:55:44 | INFO  | A [3] ---- keystone 2025-06-02 19:55:44.585156 | orchestrator | 2025-06-02 19:55:44 | INFO  | A [4] ----- neutron 2025-06-02 19:55:44.585173 | orchestrator | 2025-06-02 19:55:44 | INFO  | D [5] ------ wait-for-nova 2025-06-02 19:55:44.585456 | orchestrator | 2025-06-02 19:55:44 | INFO  | A [5] ------ octavia 2025-06-02 19:55:44.586132 | orchestrator | 2025-06-02 19:55:44 | INFO  | D [4] ----- barbican 2025-06-02 19:55:44.586643 | orchestrator | 
2025-06-02 19:55:44 | INFO  | D [4] ----- designate 2025-06-02 19:55:44.586663 | orchestrator | 2025-06-02 19:55:44 | INFO  | D [4] ----- ironic 2025-06-02 19:55:44.586740 | orchestrator | 2025-06-02 19:55:44 | INFO  | D [4] ----- placement 2025-06-02 19:55:44.586753 | orchestrator | 2025-06-02 19:55:44 | INFO  | D [4] ----- magnum 2025-06-02 19:55:44.587053 | orchestrator | 2025-06-02 19:55:44 | INFO  | A [1] -- openvswitch 2025-06-02 19:55:44.587116 | orchestrator | 2025-06-02 19:55:44 | INFO  | D [2] --- ovn 2025-06-02 19:55:44.587739 | orchestrator | 2025-06-02 19:55:44 | INFO  | D [1] -- memcached 2025-06-02 19:55:44.587756 | orchestrator | 2025-06-02 19:55:44 | INFO  | D [1] -- redis 2025-06-02 19:55:44.587812 | orchestrator | 2025-06-02 19:55:44 | INFO  | D [1] -- rabbitmq-ng 2025-06-02 19:55:44.588040 | orchestrator | 2025-06-02 19:55:44 | INFO  | A [0] - kubernetes 2025-06-02 19:55:44.590288 | orchestrator | 2025-06-02 19:55:44 | INFO  | D [1] -- kubeconfig 2025-06-02 19:55:44.590378 | orchestrator | 2025-06-02 19:55:44 | INFO  | A [1] -- copy-kubeconfig 2025-06-02 19:55:44.590521 | orchestrator | 2025-06-02 19:55:44 | INFO  | A [0] - ceph 2025-06-02 19:55:44.595080 | orchestrator | 2025-06-02 19:55:44 | INFO  | A [1] -- ceph-pools 2025-06-02 19:55:44.595125 | orchestrator | 2025-06-02 19:55:44 | INFO  | A [2] --- copy-ceph-keys 2025-06-02 19:55:44.595137 | orchestrator | 2025-06-02 19:55:44 | INFO  | A [3] ---- cephclient 2025-06-02 19:55:44.595148 | orchestrator | 2025-06-02 19:55:44 | INFO  | D [4] ----- ceph-bootstrap-dashboard 2025-06-02 19:55:44.595159 | orchestrator | 2025-06-02 19:55:44 | INFO  | A [4] ----- wait-for-keystone 2025-06-02 19:55:44.595170 | orchestrator | 2025-06-02 19:55:44 | INFO  | D [5] ------ kolla-ceph-rgw 2025-06-02 19:55:44.595180 | orchestrator | 2025-06-02 19:55:44 | INFO  | D [5] ------ glance 2025-06-02 19:55:44.595191 | orchestrator | 2025-06-02 19:55:44 | INFO  | D [5] ------ cinder 2025-06-02 19:55:44.595203 | 
orchestrator | 2025-06-02 19:55:44 | INFO  | D [5] ------ nova 2025-06-02 19:55:44.595221 | orchestrator | 2025-06-02 19:55:44 | INFO  | A [4] ----- prometheus 2025-06-02 19:55:44.595240 | orchestrator | 2025-06-02 19:55:44 | INFO  | D [5] ------ grafana 2025-06-02 19:55:44.809065 | orchestrator | 2025-06-02 19:55:44 | INFO  | All tasks of the collection nutshell are prepared for execution 2025-06-02 19:55:44.809160 | orchestrator | 2025-06-02 19:55:44 | INFO  | Tasks are running in the background 2025-06-02 19:55:47.455861 | orchestrator | 2025-06-02 19:55:47 | INFO  | No task IDs specified, wait for all currently running tasks 2025-06-02 19:55:49.568743 | orchestrator | 2025-06-02 19:55:49 | INFO  | Task f528be20-3cbf-406e-85c6-261e3e1f1534 is in state STARTED 2025-06-02 19:55:49.569032 | orchestrator | 2025-06-02 19:55:49 | INFO  | Task eebe13b4-9676-4fca-984c-a8f0383ce104 is in state STARTED 2025-06-02 19:55:49.569529 | orchestrator | 2025-06-02 19:55:49 | INFO  | Task b41c0d57-d9ea-4cb0-90dc-4a0aa7c959d7 is in state STARTED 2025-06-02 19:55:49.571075 | orchestrator | 2025-06-02 19:55:49 | INFO  | Task a655ac42-d1a8-4946-add2-71bac4b2c0d6 is in state STARTED 2025-06-02 19:55:49.573050 | orchestrator | 2025-06-02 19:55:49 | INFO  | Task 7f248acf-4c28-4403-956b-ae16a970f532 is in state STARTED 2025-06-02 19:55:49.573369 | orchestrator | 2025-06-02 19:55:49 | INFO  | Task 79e4a91d-9f91-4c53-92e4-cb6c7156cd28 is in state STARTED 2025-06-02 19:55:49.575668 | orchestrator | 2025-06-02 19:55:49 | INFO  | Task 4efb0510-077d-4334-9816-4b4ce82f3dee is in state STARTED 2025-06-02 19:55:49.575702 | orchestrator | 2025-06-02 19:55:49 | INFO  | Wait 1 second(s) until the next check 2025-06-02 19:55:52.648481 | orchestrator | 2025-06-02 19:55:52 | INFO  | Task f528be20-3cbf-406e-85c6-261e3e1f1534 is in state STARTED 2025-06-02 19:55:52.651510 | orchestrator | 2025-06-02 19:55:52 | INFO  | Task eebe13b4-9676-4fca-984c-a8f0383ce104 is in state STARTED 2025-06-02 19:55:52.651567 
| orchestrator | 2025-06-02 19:55:52 | INFO  | Task b41c0d57-d9ea-4cb0-90dc-4a0aa7c959d7 is in state STARTED 2025-06-02 19:55:52.651580 | orchestrator | 2025-06-02 19:55:52 | INFO  | Task a655ac42-d1a8-4946-add2-71bac4b2c0d6 is in state STARTED 2025-06-02 19:55:52.651591 | orchestrator | 2025-06-02 19:55:52 | INFO  | Task 7f248acf-4c28-4403-956b-ae16a970f532 is in state STARTED 2025-06-02 19:55:52.651911 | orchestrator | 2025-06-02 19:55:52 | INFO  | Task 79e4a91d-9f91-4c53-92e4-cb6c7156cd28 is in state STARTED 2025-06-02 19:55:52.658504 | orchestrator | 2025-06-02 19:55:52 | INFO  | Task 4efb0510-077d-4334-9816-4b4ce82f3dee is in state STARTED 2025-06-02 19:55:52.658569 | orchestrator | 2025-06-02 19:55:52 | INFO  | Wait 1 second(s) until the next check 2025-06-02 19:56:07.954206 | orchestrator | 2025-06-02 19:56:07 | INFO  | Task f528be20-3cbf-406e-85c6-261e3e1f1534 is in state STARTED 2025-06-02 19:56:07.956327 | orchestrator | 2025-06-02 19:56:07 | INFO  | Task eebe13b4-9676-4fca-984c-a8f0383ce104 is in state STARTED 2025-06-02 19:56:07.959627 | orchestrator | 2025-06-02 19:56:07 | INFO  | Task b41c0d57-d9ea-4cb0-90dc-4a0aa7c959d7 is in state STARTED 2025-06-02 19:56:07.961681 | orchestrator | 2025-06-02 19:56:07 | INFO  | Task a655ac42-d1a8-4946-add2-71bac4b2c0d6 is in state STARTED 2025-06-02 19:56:07.962275 | orchestrator | 2025-06-02 19:56:07 | INFO  | Task 7f248acf-4c28-4403-956b-ae16a970f532 is in state STARTED 2025-06-02 19:56:07.963591 | orchestrator | 2025-06-02 19:56:07 | INFO  | Task 79e4a91d-9f91-4c53-92e4-cb6c7156cd28 is in state STARTED 2025-06-02 19:56:07.964580 | orchestrator | 2025-06-02 19:56:07 | INFO  | Task 4efb0510-077d-4334-9816-4b4ce82f3dee is in state STARTED 2025-06-02
19:56:07.964605 | orchestrator | 2025-06-02 19:56:07 | INFO  | Wait 1 second(s) until the next check 2025-06-02 19:56:11.078989 | orchestrator | 2025-06-02 19:56:11 | INFO  | Task f528be20-3cbf-406e-85c6-261e3e1f1534 is in state STARTED 2025-06-02 19:56:11.079095 | orchestrator | 2025-06-02 19:56:11 | INFO  | Task eebe13b4-9676-4fca-984c-a8f0383ce104 is in state STARTED 2025-06-02 19:56:11.079111 | orchestrator | 2025-06-02 19:56:11 | INFO  | Task b41c0d57-d9ea-4cb0-90dc-4a0aa7c959d7 is in state STARTED 2025-06-02 19:56:11.079804 | orchestrator | 2025-06-02 19:56:11 | INFO  | Task a655ac42-d1a8-4946-add2-71bac4b2c0d6 is in state STARTED 2025-06-02 19:56:11.082823 | orchestrator | 2025-06-02 19:56:11 | INFO  | Task 7f248acf-4c28-4403-956b-ae16a970f532 is in state STARTED 2025-06-02 19:56:11.082882 | orchestrator | 2025-06-02 19:56:11 | INFO  | Task 79e4a91d-9f91-4c53-92e4-cb6c7156cd28 is in state STARTED 2025-06-02 19:56:11.088523 | orchestrator | 2025-06-02 19:56:11 | INFO  | Task 4efb0510-077d-4334-9816-4b4ce82f3dee is in state STARTED 2025-06-02 19:56:11.088576 | orchestrator | 2025-06-02 19:56:11 | INFO  | Wait 1 second(s) until the next check 2025-06-02 19:56:14.146626 | orchestrator | 2025-06-02 19:56:14 | INFO  | Task f528be20-3cbf-406e-85c6-261e3e1f1534 is in state STARTED 2025-06-02 19:56:14.148243 | orchestrator | 2025-06-02 19:56:14 | INFO  | Task eebe13b4-9676-4fca-984c-a8f0383ce104 is in state STARTED 2025-06-02 19:56:14.150100 | orchestrator | 2025-06-02 19:56:14 | INFO  | Task b41c0d57-d9ea-4cb0-90dc-4a0aa7c959d7 is in state STARTED 2025-06-02 19:56:14.153025 | orchestrator | 2025-06-02 19:56:14 | INFO  | Task a655ac42-d1a8-4946-add2-71bac4b2c0d6 is in state STARTED 2025-06-02 19:56:14.154101 | orchestrator | 2025-06-02 19:56:14.154122 | orchestrator | PLAY [Apply role geerlingguy.dotfiles] ***************************************** 2025-06-02 19:56:14.154128 | orchestrator | 2025-06-02 19:56:14.154133 | orchestrator | TASK [geerlingguy.dotfiles : 
Ensure dotfiles repository is cloned locally.] **** 2025-06-02 19:56:14.154137 | orchestrator | Monday 02 June 2025 19:55:57 +0000 (0:00:00.909) 0:00:00.909 *********** 2025-06-02 19:56:14.154141 | orchestrator | changed: [testbed-node-0] 2025-06-02 19:56:14.154147 | orchestrator | changed: [testbed-manager] 2025-06-02 19:56:14.154151 | orchestrator | changed: [testbed-node-1] 2025-06-02 19:56:14.154155 | orchestrator | changed: [testbed-node-2] 2025-06-02 19:56:14.154158 | orchestrator | changed: [testbed-node-3] 2025-06-02 19:56:14.154162 | orchestrator | changed: [testbed-node-4] 2025-06-02 19:56:14.154166 | orchestrator | changed: [testbed-node-5] 2025-06-02 19:56:14.154170 | orchestrator | 2025-06-02 19:56:14.154174 | orchestrator | TASK [geerlingguy.dotfiles : Ensure all configured dotfiles are links.] ******** 2025-06-02 19:56:14.154178 | orchestrator | Monday 02 June 2025 19:56:01 +0000 (0:00:04.511) 0:00:05.421 *********** 2025-06-02 19:56:14.154183 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf) 2025-06-02 19:56:14.154187 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf) 2025-06-02 19:56:14.154191 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf) 2025-06-02 19:56:14.154194 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf) 2025-06-02 19:56:14.154199 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf) 2025-06-02 19:56:14.154203 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf) 2025-06-02 19:56:14.154206 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf) 2025-06-02 19:56:14.154210 | orchestrator | 2025-06-02 19:56:14.154214 | orchestrator | TASK [geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked.] 
*** 2025-06-02 19:56:14.154218 | orchestrator | Monday 02 June 2025 19:56:04 +0000 (0:00:02.872) 0:00:08.294 *********** 2025-06-02 19:56:14.154230 | orchestrator | ok: [testbed-node-1] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-06-02 19:56:02.812468', 'end': '2025-06-02 19:56:02.820647', 'delta': '0:00:00.008179', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-06-02 19:56:14.154255 | orchestrator | ok: [testbed-node-0] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-06-02 19:56:02.853785', 'end': '2025-06-02 19:56:02.864456', 'delta': '0:00:00.010671', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-06-02 19:56:14.154262 | orchestrator | ok: [testbed-manager] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access 
'/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-06-02 19:56:02.823113', 'end': '2025-06-02 19:56:02.827807', 'delta': '0:00:00.004694', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-06-02 19:56:14.154275 | orchestrator | ok: [testbed-node-2] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-06-02 19:56:03.442439', 'end': '2025-06-02 19:56:03.451047', 'delta': '0:00:00.008608', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-06-02 19:56:14.154279 | orchestrator | ok: [testbed-node-3] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-06-02 19:56:03.791397', 'end': '2025-06-02 19:56:03.799789', 'delta': '0:00:00.008392', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': 
{'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-06-02 19:56:14.154285 | orchestrator | ok: [testbed-node-4] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-06-02 19:56:04.227130', 'end': '2025-06-02 19:56:04.234409', 'delta': '0:00:00.007279', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-06-02 19:56:14.154296 | orchestrator | ok: [testbed-node-5] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-06-02 19:56:04.520870', 'end': '2025-06-02 19:56:04.530348', 'delta': '0:00:00.009478', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': 
["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-06-02 19:56:14.154300 | orchestrator | 2025-06-02 19:56:14.154304 | orchestrator | TASK [geerlingguy.dotfiles : Ensure parent folders of link dotfiles exist.] **** 2025-06-02 19:56:14.154308 | orchestrator | Monday 02 June 2025 19:56:06 +0000 (0:00:01.975) 0:00:10.269 *********** 2025-06-02 19:56:14.154312 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf) 2025-06-02 19:56:14.154315 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf) 2025-06-02 19:56:14.154319 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf) 2025-06-02 19:56:14.154323 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf) 2025-06-02 19:56:14.154326 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf) 2025-06-02 19:56:14.154330 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf) 2025-06-02 19:56:14.154334 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf) 2025-06-02 19:56:14.154337 | orchestrator | 2025-06-02 19:56:14.154341 | orchestrator | TASK [geerlingguy.dotfiles : Link dotfiles into home folder.] 
****************** 2025-06-02 19:56:14.154345 | orchestrator | Monday 02 June 2025 19:56:08 +0000 (0:00:01.600) 0:00:11.870 *********** 2025-06-02 19:56:14.154349 | orchestrator | changed: [testbed-manager] => (item=.tmux.conf) 2025-06-02 19:56:14.154352 | orchestrator | changed: [testbed-node-0] => (item=.tmux.conf) 2025-06-02 19:56:14.154356 | orchestrator | changed: [testbed-node-1] => (item=.tmux.conf) 2025-06-02 19:56:14.154360 | orchestrator | changed: [testbed-node-2] => (item=.tmux.conf) 2025-06-02 19:56:14.154364 | orchestrator | changed: [testbed-node-3] => (item=.tmux.conf) 2025-06-02 19:56:14.154367 | orchestrator | changed: [testbed-node-4] => (item=.tmux.conf) 2025-06-02 19:56:14.154371 | orchestrator | changed: [testbed-node-5] => (item=.tmux.conf) 2025-06-02 19:56:14.154375 | orchestrator | 2025-06-02 19:56:14.154378 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-02 19:56:14.154386 | orchestrator | testbed-manager : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-02 19:56:14.154392 | orchestrator | testbed-node-0 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-02 19:56:14.154396 | orchestrator | testbed-node-1 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-02 19:56:14.154400 | orchestrator | testbed-node-2 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-02 19:56:14.154403 | orchestrator | testbed-node-3 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-02 19:56:14.154411 | orchestrator | testbed-node-4 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-02 19:56:14.154414 | orchestrator | testbed-node-5 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-02 19:56:14.154418 | orchestrator | 2025-06-02 19:56:14.154449 | orchestrator | 2025-06-02 19:56:14.154453 | orchestrator | TASKS 
RECAP ******************************************************************** 2025-06-02 19:56:14.154457 | orchestrator | Monday 02 June 2025 19:56:12 +0000 (0:00:03.872) 0:00:15.742 *********** 2025-06-02 19:56:14.154461 | orchestrator | =============================================================================== 2025-06-02 19:56:14.154464 | orchestrator | geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally. ---- 4.51s 2025-06-02 19:56:14.154468 | orchestrator | geerlingguy.dotfiles : Link dotfiles into home folder. ------------------ 3.87s 2025-06-02 19:56:14.154472 | orchestrator | geerlingguy.dotfiles : Ensure all configured dotfiles are links. -------- 2.87s 2025-06-02 19:56:14.154476 | orchestrator | geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked. --- 1.98s 2025-06-02 19:56:14.154480 | orchestrator | geerlingguy.dotfiles : Ensure parent folders of link dotfiles exist. ---- 1.60s 2025-06-02 19:56:14.154494 | orchestrator | 2025-06-02 19:56:14 | INFO  | Task 7f248acf-4c28-4403-956b-ae16a970f532 is in state SUCCESS 2025-06-02 19:56:14.157562 | orchestrator | 2025-06-02 19:56:14 | INFO  | Task 79e4a91d-9f91-4c53-92e4-cb6c7156cd28 is in state STARTED 2025-06-02 19:56:14.162132 | orchestrator | 2025-06-02 19:56:14 | INFO  | Task 4efb0510-077d-4334-9816-4b4ce82f3dee is in state STARTED 2025-06-02 19:56:14.165490 | orchestrator | 2025-06-02 19:56:14 | INFO  | Task 2f884583-ed83-40a1-b7ab-279704e18442 is in state STARTED 2025-06-02 19:56:14.165734 | orchestrator | 2025-06-02 19:56:14 | INFO  | Wait 1 second(s) until the next check 2025-06-02 19:56:17.201253 | orchestrator | 2025-06-02 19:56:17 | INFO  | Task f528be20-3cbf-406e-85c6-261e3e1f1534 is in state STARTED 2025-06-02 19:56:17.201362 | orchestrator | 2025-06-02 19:56:17 | INFO  | Task eebe13b4-9676-4fca-984c-a8f0383ce104 is in state STARTED 2025-06-02 19:56:17.201377 | orchestrator | 2025-06-02 19:56:17 | INFO  | Task b41c0d57-d9ea-4cb0-90dc-4a0aa7c959d7 is 
in state STARTED 2025-06-02 19:56:17.201524 | orchestrator | 2025-06-02 19:56:17 | INFO  | Task a655ac42-d1a8-4946-add2-71bac4b2c0d6 is in state STARTED 2025-06-02 19:56:17.202587 | orchestrator | 2025-06-02 19:56:17 | INFO  | Task 79e4a91d-9f91-4c53-92e4-cb6c7156cd28 is in state STARTED 2025-06-02 19:56:17.203037 | orchestrator | 2025-06-02 19:56:17 | INFO  | Task 4efb0510-077d-4334-9816-4b4ce82f3dee is in state STARTED 2025-06-02 19:56:17.203861 | orchestrator | 2025-06-02 19:56:17 | INFO  | Task 2f884583-ed83-40a1-b7ab-279704e18442 is in state STARTED 2025-06-02 19:56:17.203887 | orchestrator | 2025-06-02 19:56:17 | INFO  | Wait 1 second(s) until the next check 2025-06-02 19:56:20.256778 | orchestrator | 2025-06-02 19:56:20 | INFO  | Task f528be20-3cbf-406e-85c6-261e3e1f1534 is in state STARTED 2025-06-02 19:56:20.257613 | orchestrator | 2025-06-02 19:56:20 | INFO  | Task eebe13b4-9676-4fca-984c-a8f0383ce104 is in state STARTED 2025-06-02 19:56:20.267704 | orchestrator | 2025-06-02 19:56:20 | INFO  | Task b41c0d57-d9ea-4cb0-90dc-4a0aa7c959d7 is in state STARTED 2025-06-02 19:56:20.267785 | orchestrator | 2025-06-02 19:56:20 | INFO  | Task a655ac42-d1a8-4946-add2-71bac4b2c0d6 is in state STARTED 2025-06-02 19:56:20.271646 | orchestrator | 2025-06-02 19:56:20 | INFO  | Task 79e4a91d-9f91-4c53-92e4-cb6c7156cd28 is in state STARTED 2025-06-02 19:56:20.271766 | orchestrator | 2025-06-02 19:56:20 | INFO  | Task 4efb0510-077d-4334-9816-4b4ce82f3dee is in state STARTED 2025-06-02 19:56:20.271782 | orchestrator | 2025-06-02 19:56:20 | INFO  | Task 2f884583-ed83-40a1-b7ab-279704e18442 is in state STARTED 2025-06-02 19:56:20.271795 | orchestrator | 2025-06-02 19:56:20 | INFO  | Wait 1 second(s) until the next check 2025-06-02 19:56:23.356293 | orchestrator | 2025-06-02 19:56:23 | INFO  | Task f528be20-3cbf-406e-85c6-261e3e1f1534 is in state STARTED 2025-06-02 19:56:23.358204 | orchestrator | 2025-06-02 19:56:23 | INFO  | Task eebe13b4-9676-4fca-984c-a8f0383ce104 is in 
state STARTED 2025-06-02 19:56:23.358246 | orchestrator | 2025-06-02 19:56:23 | INFO  | Task b41c0d57-d9ea-4cb0-90dc-4a0aa7c959d7 is in state STARTED 2025-06-02 19:56:23.358749 | orchestrator | 2025-06-02 19:56:23 | INFO  | Task a655ac42-d1a8-4946-add2-71bac4b2c0d6 is in state STARTED 2025-06-02 19:56:23.359622 | orchestrator | 2025-06-02 19:56:23 | INFO  | Task 79e4a91d-9f91-4c53-92e4-cb6c7156cd28 is in state STARTED 2025-06-02 19:56:23.364688 | orchestrator | 2025-06-02 19:56:23 | INFO  | Task 4efb0510-077d-4334-9816-4b4ce82f3dee is in state STARTED 2025-06-02 19:56:23.369469 | orchestrator | 2025-06-02 19:56:23 | INFO  | Task 2f884583-ed83-40a1-b7ab-279704e18442 is in state STARTED 2025-06-02 19:56:23.369528 | orchestrator | 2025-06-02 19:56:23 | INFO  | Wait 1 second(s) until the next check 2025-06-02 19:56:26.422840 | orchestrator | 2025-06-02 19:56:26 | INFO  | Task f528be20-3cbf-406e-85c6-261e3e1f1534 is in state STARTED 2025-06-02 19:56:26.425809 | orchestrator | 2025-06-02 19:56:26 | INFO  | Task eebe13b4-9676-4fca-984c-a8f0383ce104 is in state STARTED 2025-06-02 19:56:26.425880 | orchestrator | 2025-06-02 19:56:26 | INFO  | Task b41c0d57-d9ea-4cb0-90dc-4a0aa7c959d7 is in state STARTED 2025-06-02 19:56:26.426685 | orchestrator | 2025-06-02 19:56:26 | INFO  | Task a655ac42-d1a8-4946-add2-71bac4b2c0d6 is in state STARTED 2025-06-02 19:56:26.427147 | orchestrator | 2025-06-02 19:56:26 | INFO  | Task 79e4a91d-9f91-4c53-92e4-cb6c7156cd28 is in state STARTED 2025-06-02 19:56:26.429218 | orchestrator | 2025-06-02 19:56:26 | INFO  | Task 4efb0510-077d-4334-9816-4b4ce82f3dee is in state STARTED 2025-06-02 19:56:26.429888 | orchestrator | 2025-06-02 19:56:26 | INFO  | Task 2f884583-ed83-40a1-b7ab-279704e18442 is in state STARTED 2025-06-02 19:56:26.430138 | orchestrator | 2025-06-02 19:56:26 | INFO  | Wait 1 second(s) until the next check 2025-06-02 19:56:29.468192 | orchestrator | 2025-06-02 19:56:29 | INFO  | Task f528be20-3cbf-406e-85c6-261e3e1f1534 is in state 
STARTED 2025-06-02 19:56:29.468815 | orchestrator | 2025-06-02 19:56:29 | INFO  | Task eebe13b4-9676-4fca-984c-a8f0383ce104 is in state SUCCESS 2025-06-02 19:56:29.471769 | orchestrator | 2025-06-02 19:56:29 | INFO  | Task b41c0d57-d9ea-4cb0-90dc-4a0aa7c959d7 is in state STARTED 2025-06-02 19:56:29.471812 | orchestrator | 2025-06-02 19:56:29 | INFO  | Task a655ac42-d1a8-4946-add2-71bac4b2c0d6 is in state STARTED 2025-06-02 19:56:29.472107 | orchestrator | 2025-06-02 19:56:29 | INFO  | Task 79e4a91d-9f91-4c53-92e4-cb6c7156cd28 is in state STARTED 2025-06-02 19:56:29.474263 | orchestrator | 2025-06-02 19:56:29 | INFO  | Task 4efb0510-077d-4334-9816-4b4ce82f3dee is in state STARTED 2025-06-02 19:56:29.475955 | orchestrator | 2025-06-02 19:56:29 | INFO  | Task 2f884583-ed83-40a1-b7ab-279704e18442 is in state STARTED 2025-06-02 19:56:29.475988 | orchestrator | 2025-06-02 19:56:29 | INFO  | Wait 1 second(s) until the next check 2025-06-02 19:56:32.530811 | orchestrator | 2025-06-02 19:56:32 | INFO  | Task f528be20-3cbf-406e-85c6-261e3e1f1534 is in state STARTED 2025-06-02 19:56:32.530921 | orchestrator | 2025-06-02 19:56:32 | INFO  | Task b41c0d57-d9ea-4cb0-90dc-4a0aa7c959d7 is in state STARTED 2025-06-02 19:56:32.538291 | orchestrator | 2025-06-02 19:56:32 | INFO  | Task a655ac42-d1a8-4946-add2-71bac4b2c0d6 is in state STARTED 2025-06-02 19:56:32.540287 | orchestrator | 2025-06-02 19:56:32 | INFO  | Task 79e4a91d-9f91-4c53-92e4-cb6c7156cd28 is in state STARTED 2025-06-02 19:56:32.543819 | orchestrator | 2025-06-02 19:56:32 | INFO  | Task 4efb0510-077d-4334-9816-4b4ce82f3dee is in state STARTED 2025-06-02 19:56:32.548907 | orchestrator | 2025-06-02 19:56:32 | INFO  | Task 2f884583-ed83-40a1-b7ab-279704e18442 is in state STARTED 2025-06-02 19:56:32.548970 | orchestrator | 2025-06-02 19:56:32 | INFO  | Wait 1 second(s) until the next check 2025-06-02 19:56:35.619863 | orchestrator | 2025-06-02 19:56:35 | INFO  | Task f528be20-3cbf-406e-85c6-261e3e1f1534 is in state STARTED 
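As an aside on the geerlingguy.dotfiles sequence recorded earlier (check whether `~/.tmux.conf` exists, ensure parent folders, remove a pre-existing plain file, then link it into the home folder), here is a minimal Python sketch of that flow. `link_dotfile` is a hypothetical helper name, not part of the role; the task titles quoted in the comments are the ones visible in the log.

```python
import os
from pathlib import Path

def link_dotfile(repo_dir, home, name):
    """Sketch of the dotfiles flow seen in the log: ensure the parent
    folder exists, drop a pre-existing regular file, then symlink the
    dotfile (e.g. .tmux.conf) to the cloned dotfiles repository."""
    src = Path(repo_dir) / name
    dest = Path(home) / name
    dest.parent.mkdir(parents=True, exist_ok=True)  # "Ensure parent folders of link dotfiles exist."
    if dest.exists() and not dest.is_symlink():     # "Remove existing dotfiles file if a replacement is being linked."
        dest.unlink()
    if not dest.is_symlink():
        os.symlink(src, dest)                       # "Link dotfiles into home folder."
    return dest
```

Calling it a second time is a no-op once the symlink exists, which matches the role's idempotent behaviour (tasks report `ok` instead of `changed` on a re-run).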
2025-06-02 19:56:35.620036 | orchestrator | 2025-06-02 19:56:35 | INFO  | Task b41c0d57-d9ea-4cb0-90dc-4a0aa7c959d7 is in state STARTED 2025-06-02 19:56:35.622495 | orchestrator | 2025-06-02 19:56:35 | INFO  | Task a655ac42-d1a8-4946-add2-71bac4b2c0d6 is in state STARTED 2025-06-02 19:56:35.623431 | orchestrator | 2025-06-02 19:56:35 | INFO  | Task 79e4a91d-9f91-4c53-92e4-cb6c7156cd28 is in state STARTED 2025-06-02 19:56:35.624785 | orchestrator | 2025-06-02 19:56:35 | INFO  | Task 4efb0510-077d-4334-9816-4b4ce82f3dee is in state STARTED 2025-06-02 19:56:35.626648 | orchestrator | 2025-06-02 19:56:35 | INFO  | Task 2f884583-ed83-40a1-b7ab-279704e18442 is in state STARTED 2025-06-02 19:56:35.626692 | orchestrator | 2025-06-02 19:56:35 | INFO  | Wait 1 second(s) until the next check 2025-06-02 19:56:38.675910 | orchestrator | 2025-06-02 19:56:38 | INFO  | Task f528be20-3cbf-406e-85c6-261e3e1f1534 is in state STARTED 2025-06-02 19:56:38.676022 | orchestrator | 2025-06-02 19:56:38 | INFO  | Task b41c0d57-d9ea-4cb0-90dc-4a0aa7c959d7 is in state STARTED 2025-06-02 19:56:38.677520 | orchestrator | 2025-06-02 19:56:38 | INFO  | Task a655ac42-d1a8-4946-add2-71bac4b2c0d6 is in state STARTED 2025-06-02 19:56:38.678741 | orchestrator | 2025-06-02 19:56:38 | INFO  | Task 79e4a91d-9f91-4c53-92e4-cb6c7156cd28 is in state STARTED 2025-06-02 19:56:38.680996 | orchestrator | 2025-06-02 19:56:38 | INFO  | Task 4efb0510-077d-4334-9816-4b4ce82f3dee is in state STARTED 2025-06-02 19:56:38.685806 | orchestrator | 2025-06-02 19:56:38 | INFO  | Task 2f884583-ed83-40a1-b7ab-279704e18442 is in state STARTED 2025-06-02 19:56:38.685883 | orchestrator | 2025-06-02 19:56:38 | INFO  | Wait 1 second(s) until the next check 2025-06-02 19:56:41.760592 | orchestrator | 2025-06-02 19:56:41 | INFO  | Task f528be20-3cbf-406e-85c6-261e3e1f1534 is in state STARTED 2025-06-02 19:56:41.760732 | orchestrator | 2025-06-02 19:56:41 | INFO  | Task b41c0d57-d9ea-4cb0-90dc-4a0aa7c959d7 is in state STARTED 
2025-06-02 19:56:41.765292 | orchestrator | 2025-06-02 19:56:41 | INFO  | Task a655ac42-d1a8-4946-add2-71bac4b2c0d6 is in state STARTED 2025-06-02 19:56:41.765592 | orchestrator | 2025-06-02 19:56:41 | INFO  | Task 79e4a91d-9f91-4c53-92e4-cb6c7156cd28 is in state STARTED 2025-06-02 19:56:41.767035 | orchestrator | 2025-06-02 19:56:41 | INFO  | Task 4efb0510-077d-4334-9816-4b4ce82f3dee is in state STARTED 2025-06-02 19:56:41.770644 | orchestrator | 2025-06-02 19:56:41 | INFO  | Task 2f884583-ed83-40a1-b7ab-279704e18442 is in state STARTED 2025-06-02 19:56:41.770684 | orchestrator | 2025-06-02 19:56:41 | INFO  | Wait 1 second(s) until the next check 2025-06-02 19:56:44.803062 | orchestrator | 2025-06-02 19:56:44 | INFO  | Task f528be20-3cbf-406e-85c6-261e3e1f1534 is in state SUCCESS 2025-06-02 19:56:44.803265 | orchestrator | 2025-06-02 19:56:44 | INFO  | Task b41c0d57-d9ea-4cb0-90dc-4a0aa7c959d7 is in state STARTED 2025-06-02 19:56:44.803839 | orchestrator | 2025-06-02 19:56:44 | INFO  | Task a655ac42-d1a8-4946-add2-71bac4b2c0d6 is in state STARTED 2025-06-02 19:56:44.804588 | orchestrator | 2025-06-02 19:56:44 | INFO  | Task 79e4a91d-9f91-4c53-92e4-cb6c7156cd28 is in state STARTED 2025-06-02 19:56:44.805469 | orchestrator | 2025-06-02 19:56:44 | INFO  | Task 4efb0510-077d-4334-9816-4b4ce82f3dee is in state STARTED 2025-06-02 19:56:44.806229 | orchestrator | 2025-06-02 19:56:44 | INFO  | Task 2f884583-ed83-40a1-b7ab-279704e18442 is in state STARTED 2025-06-02 19:56:44.806278 | orchestrator | 2025-06-02 19:56:44 | INFO  | Wait 1 second(s) until the next check 2025-06-02 19:56:47.846311 | orchestrator | 2025-06-02 19:56:47 | INFO  | Task b41c0d57-d9ea-4cb0-90dc-4a0aa7c959d7 is in state STARTED 2025-06-02 19:56:47.848223 | orchestrator | 2025-06-02 19:56:47 | INFO  | Task a655ac42-d1a8-4946-add2-71bac4b2c0d6 is in state STARTED 2025-06-02 19:56:47.849336 | orchestrator | 2025-06-02 19:56:47 | INFO  | Task 79e4a91d-9f91-4c53-92e4-cb6c7156cd28 is in state STARTED 
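The repeating "Task … is in state STARTED … Wait 1 second(s) until the next check" records above come from a poll-until-done loop over background task IDs. A minimal sketch of that pattern follows; `fetch_state` is an illustrative stand-in for the real OSISM/Celery status lookup, and the function name and signature are assumptions, not the actual implementation.

```python
import time

def wait_for_tasks(task_ids, fetch_state, interval=1.0, log=print):
    """Poll each task's state once per interval until none is pending.

    fetch_state(task_id) returns a state string such as "STARTED" or
    "SUCCESS"; a task is dropped from the polling set once it reaches a
    terminal state, mirroring how the log stops mentioning a task after
    it reports SUCCESS.
    """
    pending = list(task_ids)
    results = {}
    while pending:
        still_pending = []
        for task_id in pending:
            state = fetch_state(task_id)
            log(f"Task {task_id} is in state {state}")
            if state in ("SUCCESS", "FAILURE"):
                results[task_id] = state
            else:
                still_pending.append(task_id)
        if still_pending:
            log(f"Wait {interval:g} second(s) until the next check")
            time.sleep(interval)
        pending = still_pending
    return results
```

With a one-second interval this reproduces the cadence in the log: each round prints one state line per still-pending task, then a single wait message.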
2025-06-02 19:56:47.849694 | orchestrator | 2025-06-02 19:56:47 | INFO  | Task 4efb0510-077d-4334-9816-4b4ce82f3dee is in state STARTED 2025-06-02 19:56:47.851238 | orchestrator | 2025-06-02 19:56:47 | INFO  | Task 2f884583-ed83-40a1-b7ab-279704e18442 is in state STARTED 2025-06-02 19:56:47.852154 | orchestrator | 2025-06-02 19:56:47 | INFO  | Wait 1 second(s) until the next check 2025-06-02 19:56:50.897312 | orchestrator | 2025-06-02 19:56:50 | INFO  | Task b41c0d57-d9ea-4cb0-90dc-4a0aa7c959d7 is in state STARTED 2025-06-02 19:56:50.898164 | orchestrator | 2025-06-02 19:56:50 | INFO  | Task a655ac42-d1a8-4946-add2-71bac4b2c0d6 is in state STARTED 2025-06-02 19:56:50.900803 | orchestrator | 2025-06-02 19:56:50 | INFO  | Task 79e4a91d-9f91-4c53-92e4-cb6c7156cd28 is in state STARTED 2025-06-02 19:56:50.903886 | orchestrator | 2025-06-02 19:56:50 | INFO  | Task 4efb0510-077d-4334-9816-4b4ce82f3dee is in state STARTED 2025-06-02 19:56:50.903956 | orchestrator | 2025-06-02 19:56:50 | INFO  | Task 2f884583-ed83-40a1-b7ab-279704e18442 is in state STARTED 2025-06-02 19:56:50.904664 | orchestrator | 2025-06-02 19:56:50 | INFO  | Wait 1 second(s) until the next check 2025-06-02 19:56:53.937930 | orchestrator | 2025-06-02 19:56:53 | INFO  | Task b41c0d57-d9ea-4cb0-90dc-4a0aa7c959d7 is in state STARTED 2025-06-02 19:56:53.938569 | orchestrator | 2025-06-02 19:56:53 | INFO  | Task a655ac42-d1a8-4946-add2-71bac4b2c0d6 is in state STARTED 2025-06-02 19:56:53.943803 | orchestrator | 2025-06-02 19:56:53 | INFO  | Task 79e4a91d-9f91-4c53-92e4-cb6c7156cd28 is in state STARTED 2025-06-02 19:56:53.949407 | orchestrator | 2025-06-02 19:56:53 | INFO  | Task 4efb0510-077d-4334-9816-4b4ce82f3dee is in state STARTED 2025-06-02 19:56:53.949535 | orchestrator | 2025-06-02 19:56:53 | INFO  | Task 2f884583-ed83-40a1-b7ab-279704e18442 is in state STARTED 2025-06-02 19:56:53.949548 | orchestrator | 2025-06-02 19:56:53 | INFO  | Wait 1 second(s) until the next check 2025-06-02 19:56:56.991735 | 
orchestrator | 2025-06-02 19:56:56.991819 | orchestrator | 2025-06-02 19:56:56.991835 | orchestrator | PLAY [Apply role homer] ******************************************************** 2025-06-02 19:56:56.991844 | orchestrator | 2025-06-02 19:56:56.991852 | orchestrator | TASK [osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards] *** 2025-06-02 19:56:56.991861 | orchestrator | Monday 02 June 2025 19:55:54 +0000 (0:00:00.183) 0:00:00.183 *********** 2025-06-02 19:56:56.991868 | orchestrator | ok: [testbed-manager] => { 2025-06-02 19:56:56.991877 | orchestrator |  "msg": "The support for the homer_url_kibana has been removed. Please use the homer_url_opensearch_dashboards parameter." 2025-06-02 19:56:56.991901 | orchestrator | } 2025-06-02 19:56:56.991909 | orchestrator | 2025-06-02 19:56:56.991916 | orchestrator | TASK [osism.services.homer : Create traefik external network] ****************** 2025-06-02 19:56:56.991924 | orchestrator | Monday 02 June 2025 19:55:55 +0000 (0:00:00.218) 0:00:00.401 *********** 2025-06-02 19:56:56.991931 | orchestrator | ok: [testbed-manager] 2025-06-02 19:56:56.991939 | orchestrator | 2025-06-02 19:56:56.991946 | orchestrator | TASK [osism.services.homer : Create required directories] ********************** 2025-06-02 19:56:56.991953 | orchestrator | Monday 02 June 2025 19:55:56 +0000 (0:00:01.571) 0:00:01.972 *********** 2025-06-02 19:56:56.991960 | orchestrator | changed: [testbed-manager] => (item=/opt/homer/configuration) 2025-06-02 19:56:56.991967 | orchestrator | ok: [testbed-manager] => (item=/opt/homer) 2025-06-02 19:56:56.991975 | orchestrator | 2025-06-02 19:56:56.991983 | orchestrator | TASK [osism.services.homer : Copy config.yml configuration file] *************** 2025-06-02 19:56:56.991990 | orchestrator | Monday 02 June 2025 19:55:58 +0000 (0:00:01.640) 0:00:03.613 *********** 2025-06-02 19:56:56.991997 | orchestrator | changed: [testbed-manager] 2025-06-02 19:56:56.992004 | orchestrator | 
2025-06-02 19:56:56.992011 | orchestrator | TASK [osism.services.homer : Copy docker-compose.yml file] ********************* 2025-06-02 19:56:56.992018 | orchestrator | Monday 02 June 2025 19:56:00 +0000 (0:00:02.431) 0:00:06.044 *********** 2025-06-02 19:56:56.992025 | orchestrator | changed: [testbed-manager] 2025-06-02 19:56:56.992032 | orchestrator | 2025-06-02 19:56:56.992039 | orchestrator | TASK [osism.services.homer : Manage homer service] ***************************** 2025-06-02 19:56:56.992046 | orchestrator | Monday 02 June 2025 19:56:01 +0000 (0:00:01.103) 0:00:07.148 *********** 2025-06-02 19:56:56.992053 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage homer service (10 retries left). 2025-06-02 19:56:56.992060 | orchestrator | ok: [testbed-manager] 2025-06-02 19:56:56.992067 | orchestrator | 2025-06-02 19:56:56.992074 | orchestrator | RUNNING HANDLER [osism.services.homer : Restart homer service] ***************** 2025-06-02 19:56:56.992081 | orchestrator | Monday 02 June 2025 19:56:26 +0000 (0:00:24.134) 0:00:31.283 *********** 2025-06-02 19:56:56.992088 | orchestrator | changed: [testbed-manager] 2025-06-02 19:56:56.992095 | orchestrator | 2025-06-02 19:56:56.992102 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-02 19:56:56.992109 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-02 19:56:56.992119 | orchestrator | 2025-06-02 19:56:56.992126 | orchestrator | 2025-06-02 19:56:56.992133 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-02 19:56:56.992140 | orchestrator | Monday 02 June 2025 19:56:27 +0000 (0:00:01.617) 0:00:32.901 *********** 2025-06-02 19:56:56.992147 | orchestrator | =============================================================================== 2025-06-02 19:56:56.992154 | orchestrator | osism.services.homer : Manage homer service 
---------------------------- 24.13s 2025-06-02 19:56:56.992161 | orchestrator | osism.services.homer : Copy config.yml configuration file --------------- 2.43s 2025-06-02 19:56:56.992168 | orchestrator | osism.services.homer : Create required directories ---------------------- 1.64s 2025-06-02 19:56:56.992175 | orchestrator | osism.services.homer : Restart homer service ---------------------------- 1.62s 2025-06-02 19:56:56.992182 | orchestrator | osism.services.homer : Create traefik external network ------------------ 1.57s 2025-06-02 19:56:56.992189 | orchestrator | osism.services.homer : Copy docker-compose.yml file --------------------- 1.10s 2025-06-02 19:56:56.992196 | orchestrator | osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards --- 0.22s 2025-06-02 19:56:56.992203 | orchestrator | 2025-06-02 19:56:56.992210 | orchestrator | 2025-06-02 19:56:56.992217 | orchestrator | PLAY [Apply role openstackclient] ********************************************** 2025-06-02 19:56:56.992224 | orchestrator | 2025-06-02 19:56:56.992231 | orchestrator | TASK [osism.services.openstackclient : Include tasks] ************************** 2025-06-02 19:56:56.992238 | orchestrator | Monday 02 June 2025 19:55:56 +0000 (0:00:00.597) 0:00:00.597 *********** 2025-06-02 19:56:56.992251 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/openstackclient/tasks/container-Debian-family.yml for testbed-manager 2025-06-02 19:56:56.992259 | orchestrator | 2025-06-02 19:56:56.992266 | orchestrator | TASK [osism.services.openstackclient : Create required directories] ************ 2025-06-02 19:56:56.992273 | orchestrator | Monday 02 June 2025 19:55:57 +0000 (0:00:00.455) 0:00:01.053 *********** 2025-06-02 19:56:56.992280 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/openstack) 2025-06-02 19:56:56.992287 | orchestrator | changed: [testbed-manager] => 
(item=/opt/openstackclient/data) 2025-06-02 19:56:56.992294 | orchestrator | ok: [testbed-manager] => (item=/opt/openstackclient) 2025-06-02 19:56:56.992303 | orchestrator | 2025-06-02 19:56:56.992311 | orchestrator | TASK [osism.services.openstackclient : Copy docker-compose.yml file] *********** 2025-06-02 19:56:56.992319 | orchestrator | Monday 02 June 2025 19:55:59 +0000 (0:00:02.047) 0:00:03.100 *********** 2025-06-02 19:56:56.992327 | orchestrator | changed: [testbed-manager] 2025-06-02 19:56:56.992335 | orchestrator | 2025-06-02 19:56:56.992344 | orchestrator | TASK [osism.services.openstackclient : Manage openstackclient service] ********* 2025-06-02 19:56:56.992352 | orchestrator | Monday 02 June 2025 19:56:00 +0000 (0:00:01.511) 0:00:04.611 *********** 2025-06-02 19:56:56.992393 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage openstackclient service (10 retries left). 2025-06-02 19:56:56.992402 | orchestrator | ok: [testbed-manager] 2025-06-02 19:56:56.992410 | orchestrator | 2025-06-02 19:56:56.992421 | orchestrator | TASK [osism.services.openstackclient : Copy openstack wrapper script] ********** 2025-06-02 19:56:56.992430 | orchestrator | Monday 02 June 2025 19:56:37 +0000 (0:00:36.461) 0:00:41.073 *********** 2025-06-02 19:56:56.992438 | orchestrator | changed: [testbed-manager] 2025-06-02 19:56:56.992446 | orchestrator | 2025-06-02 19:56:56.992454 | orchestrator | TASK [osism.services.openstackclient : Remove ospurge wrapper script] ********** 2025-06-02 19:56:56.992462 | orchestrator | Monday 02 June 2025 19:56:37 +0000 (0:00:00.725) 0:00:41.799 *********** 2025-06-02 19:56:56.992469 | orchestrator | ok: [testbed-manager] 2025-06-02 19:56:56.992478 | orchestrator | 2025-06-02 19:56:56.992485 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Restart openstackclient service] *** 2025-06-02 19:56:56.992494 | orchestrator | Monday 02 June 2025 19:56:38 +0000 (0:00:00.513) 0:00:42.312 *********** 2025-06-02 19:56:56.992502 
| orchestrator | changed: [testbed-manager] 2025-06-02 19:56:56.992509 | orchestrator | 2025-06-02 19:56:56.992517 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Ensure that all containers are up] *** 2025-06-02 19:56:56.992526 | orchestrator | Monday 02 June 2025 19:56:40 +0000 (0:00:02.011) 0:00:44.324 *********** 2025-06-02 19:56:56.992533 | orchestrator | changed: [testbed-manager] 2025-06-02 19:56:56.992542 | orchestrator | 2025-06-02 19:56:56.992550 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Wait for an healthy service] *** 2025-06-02 19:56:56.992557 | orchestrator | Monday 02 June 2025 19:56:40 +0000 (0:00:00.595) 0:00:44.919 *********** 2025-06-02 19:56:56.992565 | orchestrator | changed: [testbed-manager] 2025-06-02 19:56:56.992574 | orchestrator | 2025-06-02 19:56:56.992582 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Copy bash completion script] *** 2025-06-02 19:56:56.992590 | orchestrator | Monday 02 June 2025 19:56:41 +0000 (0:00:00.621) 0:00:45.541 *********** 2025-06-02 19:56:56.992598 | orchestrator | ok: [testbed-manager] 2025-06-02 19:56:56.992606 | orchestrator | 2025-06-02 19:56:56.992614 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-02 19:56:56.992622 | orchestrator | testbed-manager : ok=10  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-02 19:56:56.992630 | orchestrator | 2025-06-02 19:56:56.992639 | orchestrator | 2025-06-02 19:56:56.992647 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-02 19:56:56.992656 | orchestrator | Monday 02 June 2025 19:56:41 +0000 (0:00:00.391) 0:00:45.933 *********** 2025-06-02 19:56:56.992664 | orchestrator | =============================================================================== 2025-06-02 19:56:56.992677 | orchestrator | osism.services.openstackclient : Manage openstackclient service -------- 
36.46s 2025-06-02 19:56:56.992684 | orchestrator | osism.services.openstackclient : Create required directories ------------ 2.05s 2025-06-02 19:56:56.992691 | orchestrator | osism.services.openstackclient : Restart openstackclient service -------- 2.01s 2025-06-02 19:56:56.992698 | orchestrator | osism.services.openstackclient : Copy docker-compose.yml file ----------- 1.51s 2025-06-02 19:56:56.992705 | orchestrator | osism.services.openstackclient : Copy openstack wrapper script ---------- 0.73s 2025-06-02 19:56:56.992712 | orchestrator | osism.services.openstackclient : Wait for an healthy service ------------ 0.62s 2025-06-02 19:56:56.992719 | orchestrator | osism.services.openstackclient : Ensure that all containers are up ------ 0.60s 2025-06-02 19:56:56.992726 | orchestrator | osism.services.openstackclient : Remove ospurge wrapper script ---------- 0.51s 2025-06-02 19:56:56.992733 | orchestrator | osism.services.openstackclient : Include tasks -------------------------- 0.46s 2025-06-02 19:56:56.992740 | orchestrator | osism.services.openstackclient : Copy bash completion script ------------ 0.39s 2025-06-02 19:56:56.992747 | orchestrator | 2025-06-02 19:56:56.992754 | orchestrator | 2025-06-02 19:56:56.992761 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-06-02 19:56:56.992768 | orchestrator | 2025-06-02 19:56:56.992775 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-06-02 19:56:56.992782 | orchestrator | Monday 02 June 2025 19:55:55 +0000 (0:00:00.198) 0:00:00.198 *********** 2025-06-02 19:56:56.992789 | orchestrator | changed: [testbed-manager] => (item=enable_netdata_True) 2025-06-02 19:56:56.992796 | orchestrator | changed: [testbed-node-0] => (item=enable_netdata_True) 2025-06-02 19:56:56.992803 | orchestrator | changed: [testbed-node-1] => (item=enable_netdata_True) 2025-06-02 19:56:56.992810 | orchestrator | changed: [testbed-node-2] => 
(item=enable_netdata_True) 2025-06-02 19:56:56.992817 | orchestrator | changed: [testbed-node-3] => (item=enable_netdata_True) 2025-06-02 19:56:56.992824 | orchestrator | changed: [testbed-node-4] => (item=enable_netdata_True) 2025-06-02 19:56:56.992831 | orchestrator | changed: [testbed-node-5] => (item=enable_netdata_True) 2025-06-02 19:56:56.992838 | orchestrator | 2025-06-02 19:56:56.992845 | orchestrator | PLAY [Apply role netdata] ****************************************************** 2025-06-02 19:56:56.992851 | orchestrator | 2025-06-02 19:56:56.992858 | orchestrator | TASK [osism.services.netdata : Include distribution specific install tasks] **** 2025-06-02 19:56:56.992865 | orchestrator | Monday 02 June 2025 19:55:56 +0000 (0:00:01.370) 0:00:01.569 *********** 2025-06-02 19:56:56.992883 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-02 19:56:56.992896 | orchestrator | 2025-06-02 19:56:56.992904 | orchestrator | TASK [osism.services.netdata : Remove old architecture-dependent repository] *** 2025-06-02 19:56:56.992911 | orchestrator | Monday 02 June 2025 19:55:59 +0000 (0:00:03.459) 0:00:05.028 *********** 2025-06-02 19:56:56.992918 | orchestrator | ok: [testbed-manager] 2025-06-02 19:56:56.992925 | orchestrator | ok: [testbed-node-2] 2025-06-02 19:56:56.992932 | orchestrator | ok: [testbed-node-1] 2025-06-02 19:56:56.992939 | orchestrator | ok: [testbed-node-0] 2025-06-02 19:56:56.992946 | orchestrator | ok: [testbed-node-3] 2025-06-02 19:56:56.992957 | orchestrator | ok: [testbed-node-4] 2025-06-02 19:56:56.992965 | orchestrator | ok: [testbed-node-5] 2025-06-02 19:56:56.992972 | orchestrator | 2025-06-02 19:56:56.992982 | orchestrator | TASK [osism.services.netdata : Install apt-transport-https package] ************ 2025-06-02 
19:56:56.992989 | orchestrator | Monday 02 June 2025 19:56:01 +0000 (0:00:01.975) 0:00:07.003 *********** 2025-06-02 19:56:56.992996 | orchestrator | ok: [testbed-manager] 2025-06-02 19:56:56.993003 | orchestrator | ok: [testbed-node-0] 2025-06-02 19:56:56.993010 | orchestrator | ok: [testbed-node-1] 2025-06-02 19:56:56.993017 | orchestrator | ok: [testbed-node-2] 2025-06-02 19:56:56.993024 | orchestrator | ok: [testbed-node-4] 2025-06-02 19:56:56.993039 | orchestrator | ok: [testbed-node-3] 2025-06-02 19:56:56.993046 | orchestrator | ok: [testbed-node-5] 2025-06-02 19:56:56.993053 | orchestrator | 2025-06-02 19:56:56.993060 | orchestrator | TASK [osism.services.netdata : Add repository gpg key] ************************* 2025-06-02 19:56:56.993068 | orchestrator | Monday 02 June 2025 19:56:06 +0000 (0:00:04.394) 0:00:11.397 *********** 2025-06-02 19:56:56.993075 | orchestrator | changed: [testbed-manager] 2025-06-02 19:56:56.993082 | orchestrator | changed: [testbed-node-0] 2025-06-02 19:56:56.993089 | orchestrator | changed: [testbed-node-1] 2025-06-02 19:56:56.993096 | orchestrator | changed: [testbed-node-2] 2025-06-02 19:56:56.993103 | orchestrator | changed: [testbed-node-3] 2025-06-02 19:56:56.993110 | orchestrator | changed: [testbed-node-4] 2025-06-02 19:56:56.993117 | orchestrator | changed: [testbed-node-5] 2025-06-02 19:56:56.993124 | orchestrator | 2025-06-02 19:56:56.993131 | orchestrator | TASK [osism.services.netdata : Add repository] ********************************* 2025-06-02 19:56:56.993138 | orchestrator | Monday 02 June 2025 19:56:08 +0000 (0:00:02.527) 0:00:13.925 *********** 2025-06-02 19:56:56.993144 | orchestrator | changed: [testbed-manager] 2025-06-02 19:56:56.993151 | orchestrator | changed: [testbed-node-0] 2025-06-02 19:56:56.993158 | orchestrator | changed: [testbed-node-2] 2025-06-02 19:56:56.993165 | orchestrator | changed: [testbed-node-1] 2025-06-02 19:56:56.993172 | orchestrator | changed: [testbed-node-4] 2025-06-02 
19:56:56.993179 | orchestrator | changed: [testbed-node-3] 2025-06-02 19:56:56.993186 | orchestrator | changed: [testbed-node-5] 2025-06-02 19:56:56.993193 | orchestrator | 2025-06-02 19:56:56.993200 | orchestrator | TASK [osism.services.netdata : Install package netdata] ************************ 2025-06-02 19:56:56.993207 | orchestrator | Monday 02 June 2025 19:56:18 +0000 (0:00:10.089) 0:00:24.014 *********** 2025-06-02 19:56:56.993214 | orchestrator | changed: [testbed-node-3] 2025-06-02 19:56:56.993221 | orchestrator | changed: [testbed-node-5] 2025-06-02 19:56:56.993228 | orchestrator | changed: [testbed-node-1] 2025-06-02 19:56:56.993235 | orchestrator | changed: [testbed-node-0] 2025-06-02 19:56:56.993242 | orchestrator | changed: [testbed-manager] 2025-06-02 19:56:56.993249 | orchestrator | changed: [testbed-node-2] 2025-06-02 19:56:56.993256 | orchestrator | changed: [testbed-node-4] 2025-06-02 19:56:56.993263 | orchestrator | 2025-06-02 19:56:56.993270 | orchestrator | TASK [osism.services.netdata : Include config tasks] *************************** 2025-06-02 19:56:56.993277 | orchestrator | Monday 02 June 2025 19:56:36 +0000 (0:00:17.968) 0:00:41.982 *********** 2025-06-02 19:56:56.993285 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/config.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-02 19:56:56.993294 | orchestrator | 2025-06-02 19:56:56.993301 | orchestrator | TASK [osism.services.netdata : Copy configuration files] *********************** 2025-06-02 19:56:56.993308 | orchestrator | Monday 02 June 2025 19:56:38 +0000 (0:00:01.224) 0:00:43.207 *********** 2025-06-02 19:56:56.993315 | orchestrator | changed: [testbed-manager] => (item=netdata.conf) 2025-06-02 19:56:56.993322 | orchestrator | changed: [testbed-node-2] => (item=netdata.conf) 2025-06-02 19:56:56.993329 | orchestrator | changed: 
[testbed-node-3] => (item=netdata.conf) 2025-06-02 19:56:56.993336 | orchestrator | changed: [testbed-node-1] => (item=netdata.conf) 2025-06-02 19:56:56.993343 | orchestrator | changed: [testbed-node-0] => (item=netdata.conf) 2025-06-02 19:56:56.993350 | orchestrator | changed: [testbed-node-4] => (item=netdata.conf) 2025-06-02 19:56:56.993371 | orchestrator | changed: [testbed-node-5] => (item=netdata.conf) 2025-06-02 19:56:56.993378 | orchestrator | changed: [testbed-manager] => (item=stream.conf) 2025-06-02 19:56:56.993386 | orchestrator | changed: [testbed-node-3] => (item=stream.conf) 2025-06-02 19:56:56.993393 | orchestrator | changed: [testbed-node-2] => (item=stream.conf) 2025-06-02 19:56:56.993399 | orchestrator | changed: [testbed-node-1] => (item=stream.conf) 2025-06-02 19:56:56.993406 | orchestrator | changed: [testbed-node-4] => (item=stream.conf) 2025-06-02 19:56:56.993419 | orchestrator | changed: [testbed-node-0] => (item=stream.conf) 2025-06-02 19:56:56.993426 | orchestrator | changed: [testbed-node-5] => (item=stream.conf) 2025-06-02 19:56:56.993433 | orchestrator | 2025-06-02 19:56:56.993440 | orchestrator | TASK [osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status] *** 2025-06-02 19:56:56.993447 | orchestrator | Monday 02 June 2025 19:56:42 +0000 (0:00:04.849) 0:00:48.056 *********** 2025-06-02 19:56:56.993454 | orchestrator | ok: [testbed-manager] 2025-06-02 19:56:56.993461 | orchestrator | ok: [testbed-node-0] 2025-06-02 19:56:56.993468 | orchestrator | ok: [testbed-node-1] 2025-06-02 19:56:56.993475 | orchestrator | ok: [testbed-node-2] 2025-06-02 19:56:56.993482 | orchestrator | ok: [testbed-node-3] 2025-06-02 19:56:56.993489 | orchestrator | ok: [testbed-node-4] 2025-06-02 19:56:56.993496 | orchestrator | ok: [testbed-node-5] 2025-06-02 19:56:56.993503 | orchestrator | 2025-06-02 19:56:56.993511 | orchestrator | TASK [osism.services.netdata : Opt out from anonymous statistics] ************** 2025-06-02 
19:56:56.993522 | orchestrator | Monday 02 June 2025 19:56:43 +0000 (0:00:00.958) 0:00:49.015 *********** 2025-06-02 19:56:56.993554 | orchestrator | changed: [testbed-manager] 2025-06-02 19:56:56.993566 | orchestrator | changed: [testbed-node-0] 2025-06-02 19:56:56.993578 | orchestrator | changed: [testbed-node-1] 2025-06-02 19:56:56.993589 | orchestrator | changed: [testbed-node-2] 2025-06-02 19:56:56.993599 | orchestrator | changed: [testbed-node-3] 2025-06-02 19:56:56.993606 | orchestrator | changed: [testbed-node-4] 2025-06-02 19:56:56.993613 | orchestrator | changed: [testbed-node-5] 2025-06-02 19:56:56.993620 | orchestrator | 2025-06-02 19:56:56.993627 | orchestrator | TASK [osism.services.netdata : Add netdata user to docker group] *************** 2025-06-02 19:56:56.993639 | orchestrator | Monday 02 June 2025 19:56:45 +0000 (0:00:01.238) 0:00:50.253 *********** 2025-06-02 19:56:56.993647 | orchestrator | ok: [testbed-manager] 2025-06-02 19:56:56.993654 | orchestrator | ok: [testbed-node-0] 2025-06-02 19:56:56.993661 | orchestrator | ok: [testbed-node-1] 2025-06-02 19:56:56.993668 | orchestrator | ok: [testbed-node-2] 2025-06-02 19:56:56.993675 | orchestrator | ok: [testbed-node-3] 2025-06-02 19:56:56.993682 | orchestrator | ok: [testbed-node-4] 2025-06-02 19:56:56.993689 | orchestrator | ok: [testbed-node-5] 2025-06-02 19:56:56.993696 | orchestrator | 2025-06-02 19:56:56.993727 | orchestrator | TASK [osism.services.netdata : Manage service netdata] ************************* 2025-06-02 19:56:56.993736 | orchestrator | Monday 02 June 2025 19:56:46 +0000 (0:00:01.546) 0:00:51.800 *********** 2025-06-02 19:56:56.993743 | orchestrator | ok: [testbed-node-2] 2025-06-02 19:56:56.993750 | orchestrator | ok: [testbed-node-1] 2025-06-02 19:56:56.993757 | orchestrator | ok: [testbed-node-0] 2025-06-02 19:56:56.993764 | orchestrator | ok: [testbed-node-3] 2025-06-02 19:56:56.993771 | orchestrator | ok: [testbed-manager] 2025-06-02 19:56:56.993778 | orchestrator | ok: 
[testbed-node-4] 2025-06-02 19:56:56.993787 | orchestrator | ok: [testbed-node-5] 2025-06-02 19:56:56.993799 | orchestrator | 2025-06-02 19:56:56.993811 | orchestrator | TASK [osism.services.netdata : Include host type specific tasks] *************** 2025-06-02 19:56:56.993824 | orchestrator | Monday 02 June 2025 19:56:48 +0000 (0:00:01.866) 0:00:53.667 *********** 2025-06-02 19:56:56.993832 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/server.yml for testbed-manager 2025-06-02 19:56:56.993840 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/client.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-02 19:56:56.993848 | orchestrator | 2025-06-02 19:56:56.993855 | orchestrator | TASK [osism.services.netdata : Set sysctl vm.max_map_count parameter] ********** 2025-06-02 19:56:56.993862 | orchestrator | Monday 02 June 2025 19:56:50 +0000 (0:00:01.447) 0:00:55.115 *********** 2025-06-02 19:56:56.993869 | orchestrator | changed: [testbed-manager] 2025-06-02 19:56:56.993876 | orchestrator | 2025-06-02 19:56:56.993883 | orchestrator | RUNNING HANDLER [osism.services.netdata : Restart service netdata] ************* 2025-06-02 19:56:56.993890 | orchestrator | Monday 02 June 2025 19:56:52 +0000 (0:00:02.012) 0:00:57.127 *********** 2025-06-02 19:56:56.993903 | orchestrator | changed: [testbed-manager] 2025-06-02 19:56:56.993910 | orchestrator | changed: [testbed-node-0] 2025-06-02 19:56:56.993917 | orchestrator | changed: [testbed-node-1] 2025-06-02 19:56:56.993924 | orchestrator | changed: [testbed-node-3] 2025-06-02 19:56:56.993931 | orchestrator | changed: [testbed-node-2] 2025-06-02 19:56:56.993938 | orchestrator | changed: [testbed-node-5] 2025-06-02 19:56:56.993945 | orchestrator | changed: [testbed-node-4] 2025-06-02 19:56:56.993952 | orchestrator | 2025-06-02 19:56:56.993959 | 
orchestrator | PLAY RECAP ********************************************************************* 2025-06-02 19:56:56.993967 | orchestrator | testbed-manager : ok=16  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-02 19:56:56.993974 | orchestrator | testbed-node-0 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-02 19:56:56.993981 | orchestrator | testbed-node-1 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-02 19:56:56.993989 | orchestrator | testbed-node-2 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-02 19:56:56.993996 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-02 19:56:56.994003 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-02 19:56:56.994010 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-02 19:56:56.994094 | orchestrator | 2025-06-02 19:56:56.994103 | orchestrator | 2025-06-02 19:56:56.994110 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-02 19:56:56.994117 | orchestrator | Monday 02 June 2025 19:56:55 +0000 (0:00:03.177) 0:01:00.305 *********** 2025-06-02 19:56:56.994124 | orchestrator | =============================================================================== 2025-06-02 19:56:56.994132 | orchestrator | osism.services.netdata : Install package netdata ----------------------- 17.97s 2025-06-02 19:56:56.994139 | orchestrator | osism.services.netdata : Add repository -------------------------------- 10.09s 2025-06-02 19:56:56.994146 | orchestrator | osism.services.netdata : Copy configuration files ----------------------- 4.85s 2025-06-02 19:56:56.994153 | orchestrator | osism.services.netdata : Install apt-transport-https package ------------ 4.39s 2025-06-02 19:56:56.994160 | 
orchestrator | osism.services.netdata : Include distribution specific install tasks ---- 3.46s 2025-06-02 19:56:56.994167 | orchestrator | osism.services.netdata : Restart service netdata ------------------------ 3.18s 2025-06-02 19:56:56.994174 | orchestrator | osism.services.netdata : Add repository gpg key ------------------------- 2.53s 2025-06-02 19:56:56.994181 | orchestrator | osism.services.netdata : Set sysctl vm.max_map_count parameter ---------- 2.01s 2025-06-02 19:56:56.994188 | orchestrator | osism.services.netdata : Remove old architecture-dependent repository --- 1.98s 2025-06-02 19:56:56.994195 | orchestrator | osism.services.netdata : Manage service netdata ------------------------- 1.87s 2025-06-02 19:56:56.994203 | orchestrator | osism.services.netdata : Add netdata user to docker group --------------- 1.55s 2025-06-02 19:56:56.994216 | orchestrator | osism.services.netdata : Include host type specific tasks --------------- 1.45s 2025-06-02 19:56:56.994228 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.37s 2025-06-02 19:56:56.994235 | orchestrator | osism.services.netdata : Opt out from anonymous statistics -------------- 1.24s 2025-06-02 19:56:56.994242 | orchestrator | osism.services.netdata : Include config tasks --------------------------- 1.22s 2025-06-02 19:56:56.994261 | orchestrator | osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status --- 0.96s 2025-06-02 19:56:56.994275 | orchestrator | 2025-06-02 19:56:56 | INFO  | Task b41c0d57-d9ea-4cb0-90dc-4a0aa7c959d7 is in state SUCCESS 2025-06-02 19:56:56.994282 | orchestrator | 2025-06-02 19:56:56 | INFO  | Task a655ac42-d1a8-4946-add2-71bac4b2c0d6 is in state STARTED 2025-06-02 19:56:56.994320 | orchestrator | 2025-06-02 19:56:56 | INFO  | Task 79e4a91d-9f91-4c53-92e4-cb6c7156cd28 is in state STARTED 2025-06-02 19:56:56.994329 | orchestrator | 2025-06-02 19:56:56 | INFO  | Task 
4efb0510-077d-4334-9816-4b4ce82f3dee is in state STARTED 2025-06-02 19:56:56.997136 | orchestrator | 2025-06-02 19:56:56 | INFO  | Task 2f884583-ed83-40a1-b7ab-279704e18442 is in state STARTED 2025-06-02 19:56:56.997207 | orchestrator | 2025-06-02 19:56:56 | INFO  | Wait 1 second(s) until the next check 2025-06-02 19:57:00.035979 | orchestrator | 2025-06-02 19:57:00 | INFO  | Task a655ac42-d1a8-4946-add2-71bac4b2c0d6 is in state STARTED 2025-06-02 19:57:00.037339 | orchestrator | 2025-06-02 19:57:00 | INFO  | Task 79e4a91d-9f91-4c53-92e4-cb6c7156cd28 is in state STARTED 2025-06-02 19:57:00.037765 | orchestrator | 2025-06-02 19:57:00 | INFO  | Task 4efb0510-077d-4334-9816-4b4ce82f3dee is in state STARTED 2025-06-02 19:57:00.039511 | orchestrator | 2025-06-02 19:57:00 | INFO  | Task 2f884583-ed83-40a1-b7ab-279704e18442 is in state STARTED 2025-06-02 19:57:00.039554 | orchestrator | 2025-06-02 19:57:00 | INFO  | Wait 1 second(s) until the next check 2025-06-02 19:57:03.071888 | orchestrator | 2025-06-02 19:57:03 | INFO  | Task a655ac42-d1a8-4946-add2-71bac4b2c0d6 is in state STARTED 2025-06-02 19:57:03.072712 | orchestrator | 2025-06-02 19:57:03 | INFO  | Task 79e4a91d-9f91-4c53-92e4-cb6c7156cd28 is in state STARTED 2025-06-02 19:57:03.073740 | orchestrator | 2025-06-02 19:57:03 | INFO  | Task 4efb0510-077d-4334-9816-4b4ce82f3dee is in state STARTED 2025-06-02 19:57:03.074646 | orchestrator | 2025-06-02 19:57:03 | INFO  | Task 2f884583-ed83-40a1-b7ab-279704e18442 is in state STARTED 2025-06-02 19:57:03.074686 | orchestrator | 2025-06-02 19:57:03 | INFO  | Wait 1 second(s) until the next check 2025-06-02 19:57:06.107322 | orchestrator | 2025-06-02 19:57:06 | INFO  | Task a655ac42-d1a8-4946-add2-71bac4b2c0d6 is in state STARTED 2025-06-02 19:57:06.109679 | orchestrator | 2025-06-02 19:57:06 | INFO  | Task 79e4a91d-9f91-4c53-92e4-cb6c7156cd28 is in state STARTED 2025-06-02 19:57:06.112746 | orchestrator | 2025-06-02 19:57:06 | INFO  | Task 
4efb0510-077d-4334-9816-4b4ce82f3dee is in state STARTED 2025-06-02 19:57:06.114795 | orchestrator | 2025-06-02 19:57:06 | INFO  | Task 2f884583-ed83-40a1-b7ab-279704e18442 is in state STARTED 2025-06-02 19:57:06.114808 | orchestrator | 2025-06-02 19:57:06 | INFO  | Wait 1 second(s) until the next check 2025-06-02 19:57:09.156786 | orchestrator | 2025-06-02 19:57:09 | INFO  | Task a655ac42-d1a8-4946-add2-71bac4b2c0d6 is in state STARTED 2025-06-02 19:57:09.157479 | orchestrator | 2025-06-02 19:57:09 | INFO  | Task 79e4a91d-9f91-4c53-92e4-cb6c7156cd28 is in state STARTED 2025-06-02 19:57:09.158495 | orchestrator | 2025-06-02 19:57:09 | INFO  | Task 4efb0510-077d-4334-9816-4b4ce82f3dee is in state STARTED 2025-06-02 19:57:09.160836 | orchestrator | 2025-06-02 19:57:09 | INFO  | Task 2f884583-ed83-40a1-b7ab-279704e18442 is in state STARTED 2025-06-02 19:57:09.160875 | orchestrator | 2025-06-02 19:57:09 | INFO  | Wait 1 second(s) until the next check 2025-06-02 19:57:12.202375 | orchestrator | 2025-06-02 19:57:12 | INFO  | Task a655ac42-d1a8-4946-add2-71bac4b2c0d6 is in state STARTED 2025-06-02 19:57:12.202826 | orchestrator | 2025-06-02 19:57:12 | INFO  | Task 79e4a91d-9f91-4c53-92e4-cb6c7156cd28 is in state STARTED 2025-06-02 19:57:12.203804 | orchestrator | 2025-06-02 19:57:12 | INFO  | Task 4efb0510-077d-4334-9816-4b4ce82f3dee is in state STARTED 2025-06-02 19:57:12.205063 | orchestrator | 2025-06-02 19:57:12 | INFO  | Task 2f884583-ed83-40a1-b7ab-279704e18442 is in state STARTED 2025-06-02 19:57:12.205264 | orchestrator | 2025-06-02 19:57:12 | INFO  | Wait 1 second(s) until the next check 2025-06-02 19:57:15.243836 | orchestrator | 2025-06-02 19:57:15 | INFO  | Task a655ac42-d1a8-4946-add2-71bac4b2c0d6 is in state STARTED 2025-06-02 19:57:15.244448 | orchestrator | 2025-06-02 19:57:15 | INFO  | Task 79e4a91d-9f91-4c53-92e4-cb6c7156cd28 is in state STARTED 2025-06-02 19:57:15.245996 | orchestrator | 2025-06-02 19:57:15 | INFO  | Task 
4efb0510-077d-4334-9816-4b4ce82f3dee is in state STARTED 2025-06-02 19:57:15.246054 | orchestrator | 2025-06-02 19:57:15 | INFO  | Task 2f884583-ed83-40a1-b7ab-279704e18442 is in state STARTED 2025-06-02 19:57:15.246064 | orchestrator | 2025-06-02 19:57:15 | INFO  | Wait 1 second(s) until the next check 2025-06-02 19:57:18.296745 | orchestrator | 2025-06-02 19:57:18 | INFO  | Task a655ac42-d1a8-4946-add2-71bac4b2c0d6 is in state STARTED 2025-06-02 19:57:18.297767 | orchestrator | 2025-06-02 19:57:18 | INFO  | Task 79e4a91d-9f91-4c53-92e4-cb6c7156cd28 is in state STARTED 2025-06-02 19:57:18.299861 | orchestrator | 2025-06-02 19:57:18 | INFO  | Task 4efb0510-077d-4334-9816-4b4ce82f3dee is in state STARTED 2025-06-02 19:57:18.301083 | orchestrator | 2025-06-02 19:57:18 | INFO  | Task 2f884583-ed83-40a1-b7ab-279704e18442 is in state STARTED 2025-06-02 19:57:18.301759 | orchestrator | 2025-06-02 19:57:18 | INFO  | Wait 1 second(s) until the next check 2025-06-02 19:57:21.346972 | orchestrator | 2025-06-02 19:57:21 | INFO  | Task a655ac42-d1a8-4946-add2-71bac4b2c0d6 is in state STARTED 2025-06-02 19:57:21.348451 | orchestrator | 2025-06-02 19:57:21 | INFO  | Task 79e4a91d-9f91-4c53-92e4-cb6c7156cd28 is in state STARTED 2025-06-02 19:57:21.349804 | orchestrator | 2025-06-02 19:57:21 | INFO  | Task 4efb0510-077d-4334-9816-4b4ce82f3dee is in state STARTED 2025-06-02 19:57:21.352097 | orchestrator | 2025-06-02 19:57:21 | INFO  | Task 2f884583-ed83-40a1-b7ab-279704e18442 is in state STARTED 2025-06-02 19:57:21.352153 | orchestrator | 2025-06-02 19:57:21 | INFO  | Wait 1 second(s) until the next check 2025-06-02 19:57:24.434223 | orchestrator | 2025-06-02 19:57:24 | INFO  | Task a655ac42-d1a8-4946-add2-71bac4b2c0d6 is in state STARTED 2025-06-02 19:57:24.438889 | orchestrator | 2025-06-02 19:57:24 | INFO  | Task 79e4a91d-9f91-4c53-92e4-cb6c7156cd28 is in state STARTED 2025-06-02 19:57:24.443281 | orchestrator | 2025-06-02 19:57:24 | INFO  | Task 
4efb0510-077d-4334-9816-4b4ce82f3dee is in state STARTED 2025-06-02 19:57:24.445741 | orchestrator | 2025-06-02 19:57:24 | INFO  | Task 2f884583-ed83-40a1-b7ab-279704e18442 is in state STARTED 2025-06-02 19:57:24.445784 | orchestrator | 2025-06-02 19:57:24 | INFO  | Wait 1 second(s) until the next check 2025-06-02 19:57:27.505706 | orchestrator | 2025-06-02 19:57:27 | INFO  | Task a655ac42-d1a8-4946-add2-71bac4b2c0d6 is in state STARTED 2025-06-02 19:57:27.508160 | orchestrator | 2025-06-02 19:57:27 | INFO  | Task 79e4a91d-9f91-4c53-92e4-cb6c7156cd28 is in state STARTED 2025-06-02 19:57:27.516980 | orchestrator | 2025-06-02 19:57:27 | INFO  | Task 4efb0510-077d-4334-9816-4b4ce82f3dee is in state STARTED 2025-06-02 19:57:27.518119 | orchestrator | 2025-06-02 19:57:27 | INFO  | Task 2f884583-ed83-40a1-b7ab-279704e18442 is in state STARTED 2025-06-02 19:57:27.518412 | orchestrator | 2025-06-02 19:57:27 | INFO  | Wait 1 second(s) until the next check 2025-06-02 19:57:30.563549 | orchestrator | 2025-06-02 19:57:30 | INFO  | Task a655ac42-d1a8-4946-add2-71bac4b2c0d6 is in state STARTED 2025-06-02 19:57:30.565956 | orchestrator | 2025-06-02 19:57:30 | INFO  | Task 79e4a91d-9f91-4c53-92e4-cb6c7156cd28 is in state STARTED 2025-06-02 19:57:30.567194 | orchestrator | 2025-06-02 19:57:30 | INFO  | Task 4efb0510-077d-4334-9816-4b4ce82f3dee is in state STARTED 2025-06-02 19:57:30.570716 | orchestrator | 2025-06-02 19:57:30 | INFO  | Task 2f884583-ed83-40a1-b7ab-279704e18442 is in state STARTED 2025-06-02 19:57:30.570779 | orchestrator | 2025-06-02 19:57:30 | INFO  | Wait 1 second(s) until the next check 2025-06-02 19:57:33.616997 | orchestrator | 2025-06-02 19:57:33 | INFO  | Task a655ac42-d1a8-4946-add2-71bac4b2c0d6 is in state STARTED 2025-06-02 19:57:33.621130 | orchestrator | 2025-06-02 19:57:33 | INFO  | Task 79e4a91d-9f91-4c53-92e4-cb6c7156cd28 is in state STARTED 2025-06-02 19:57:33.621203 | orchestrator | 2025-06-02 19:57:33 | INFO  | Task 
4efb0510-077d-4334-9816-4b4ce82f3dee is in state STARTED 2025-06-02 19:57:33.623951 | orchestrator | 2025-06-02 19:57:33 | INFO  | Task 2f884583-ed83-40a1-b7ab-279704e18442 is in state STARTED 2025-06-02 19:57:33.623994 | orchestrator | 2025-06-02 19:57:33 | INFO  | Wait 1 second(s) until the next check 2025-06-02 19:57:36.687548 | orchestrator | 2025-06-02 19:57:36 | INFO  | Task a655ac42-d1a8-4946-add2-71bac4b2c0d6 is in state STARTED 2025-06-02 19:57:36.688747 | orchestrator | 2025-06-02 19:57:36 | INFO  | Task 79e4a91d-9f91-4c53-92e4-cb6c7156cd28 is in state STARTED 2025-06-02 19:57:36.691666 | orchestrator | 2025-06-02 19:57:36 | INFO  | Task 4efb0510-077d-4334-9816-4b4ce82f3dee is in state STARTED 2025-06-02 19:57:36.692785 | orchestrator | 2025-06-02 19:57:36 | INFO  | Task 2f884583-ed83-40a1-b7ab-279704e18442 is in state SUCCESS 2025-06-02 19:57:36.692835 | orchestrator | 2025-06-02 19:57:36 | INFO  | Wait 1 second(s) until the next check 2025-06-02 19:57:39.737372 | orchestrator | 2025-06-02 19:57:39 | INFO  | Task a655ac42-d1a8-4946-add2-71bac4b2c0d6 is in state STARTED 2025-06-02 19:57:39.738613 | orchestrator | 2025-06-02 19:57:39 | INFO  | Task 79e4a91d-9f91-4c53-92e4-cb6c7156cd28 is in state STARTED 2025-06-02 19:57:39.739890 | orchestrator | 2025-06-02 19:57:39 | INFO  | Task 4efb0510-077d-4334-9816-4b4ce82f3dee is in state STARTED 2025-06-02 19:57:39.739920 | orchestrator | 2025-06-02 19:57:39 | INFO  | Wait 1 second(s) until the next check 2025-06-02 19:57:42.777058 | orchestrator | 2025-06-02 19:57:42 | INFO  | Task a655ac42-d1a8-4946-add2-71bac4b2c0d6 is in state STARTED 2025-06-02 19:57:42.778343 | orchestrator | 2025-06-02 19:57:42 | INFO  | Task 79e4a91d-9f91-4c53-92e4-cb6c7156cd28 is in state STARTED 2025-06-02 19:57:42.779755 | orchestrator | 2025-06-02 19:57:42 | INFO  | Task 4efb0510-077d-4334-9816-4b4ce82f3dee is in state STARTED 2025-06-02 19:57:42.779907 | orchestrator | 2025-06-02 19:57:42 | INFO  | Wait 1 second(s) until the next 
check 2025-06-02 19:57:45.813368 | orchestrator | 2025-06-02 19:57:45 | INFO  | Task a655ac42-d1a8-4946-add2-71bac4b2c0d6 is in state STARTED 2025-06-02 19:57:45.815475 | orchestrator | 2025-06-02 19:57:45 | INFO  | Task 79e4a91d-9f91-4c53-92e4-cb6c7156cd28 is in state STARTED 2025-06-02 19:57:45.815616 | orchestrator | 2025-06-02 19:57:45 | INFO  | Task 4efb0510-077d-4334-9816-4b4ce82f3dee is in state STARTED 2025-06-02 19:57:45.815641 | orchestrator | 2025-06-02 19:57:45 | INFO  | Wait 1 second(s) until the next check 2025-06-02 19:57:48.857719 | orchestrator | 2025-06-02 19:57:48 | INFO  | Task a655ac42-d1a8-4946-add2-71bac4b2c0d6 is in state STARTED 2025-06-02 19:57:48.857864 | orchestrator | 2025-06-02 19:57:48 | INFO  | Task 79e4a91d-9f91-4c53-92e4-cb6c7156cd28 is in state STARTED 2025-06-02 19:57:48.858356 | orchestrator | 2025-06-02 19:57:48 | INFO  | Task 4efb0510-077d-4334-9816-4b4ce82f3dee is in state STARTED 2025-06-02 19:57:48.858409 | orchestrator | 2025-06-02 19:57:48 | INFO  | Wait 1 second(s) until the next check 2025-06-02 19:57:51.907665 | orchestrator | 2025-06-02 19:57:51 | INFO  | Task a655ac42-d1a8-4946-add2-71bac4b2c0d6 is in state STARTED 2025-06-02 19:57:51.911583 | orchestrator | 2025-06-02 19:57:51 | INFO  | Task 79e4a91d-9f91-4c53-92e4-cb6c7156cd28 is in state STARTED 2025-06-02 19:57:51.914270 | orchestrator | 2025-06-02 19:57:51 | INFO  | Task 4efb0510-077d-4334-9816-4b4ce82f3dee is in state STARTED 2025-06-02 19:57:51.914335 | orchestrator | 2025-06-02 19:57:51 | INFO  | Wait 1 second(s) until the next check 2025-06-02 19:57:54.951389 | orchestrator | 2025-06-02 19:57:54 | INFO  | Task a655ac42-d1a8-4946-add2-71bac4b2c0d6 is in state STARTED 2025-06-02 19:57:54.952731 | orchestrator | 2025-06-02 19:57:54 | INFO  | Task 79e4a91d-9f91-4c53-92e4-cb6c7156cd28 is in state STARTED 2025-06-02 19:57:54.954716 | orchestrator | 2025-06-02 19:57:54 | INFO  | Task 4efb0510-077d-4334-9816-4b4ce82f3dee is in state STARTED 2025-06-02 
19:57:54.954803 | orchestrator | 2025-06-02 19:57:54 | INFO  | Wait 1 second(s) until the next check 2025-06-02 19:57:58.003791 | orchestrator | 2025-06-02 19:57:58 | INFO  | Task a655ac42-d1a8-4946-add2-71bac4b2c0d6 is in state STARTED 2025-06-02 19:57:58.010206 | orchestrator | 2025-06-02 19:57:58 | INFO  | Task 79e4a91d-9f91-4c53-92e4-cb6c7156cd28 is in state STARTED 2025-06-02 19:57:58.013707 | orchestrator | 2025-06-02 19:57:58 | INFO  | Task 4efb0510-077d-4334-9816-4b4ce82f3dee is in state STARTED 2025-06-02 19:57:58.013774 | orchestrator | 2025-06-02 19:57:58 | INFO  | Wait 1 second(s) until the next check 2025-06-02 19:58:01.051170 | orchestrator | 2025-06-02 19:58:01 | INFO  | Task a655ac42-d1a8-4946-add2-71bac4b2c0d6 is in state STARTED 2025-06-02 19:58:01.053007 | orchestrator | 2025-06-02 19:58:01 | INFO  | Task 79e4a91d-9f91-4c53-92e4-cb6c7156cd28 is in state STARTED 2025-06-02 19:58:01.054884 | orchestrator | 2025-06-02 19:58:01 | INFO  | Task 4efb0510-077d-4334-9816-4b4ce82f3dee is in state STARTED 2025-06-02 19:58:01.054975 | orchestrator | 2025-06-02 19:58:01 | INFO  | Wait 1 second(s) until the next check 2025-06-02 19:58:04.106452 | orchestrator | 2025-06-02 19:58:04 | INFO  | Task a655ac42-d1a8-4946-add2-71bac4b2c0d6 is in state STARTED 2025-06-02 19:58:04.107064 | orchestrator | 2025-06-02 19:58:04 | INFO  | Task 79e4a91d-9f91-4c53-92e4-cb6c7156cd28 is in state STARTED 2025-06-02 19:58:04.108107 | orchestrator | 2025-06-02 19:58:04 | INFO  | Task 4efb0510-077d-4334-9816-4b4ce82f3dee is in state STARTED 2025-06-02 19:58:04.108124 | orchestrator | 2025-06-02 19:58:04 | INFO  | Wait 1 second(s) until the next check 2025-06-02 19:58:07.154651 | orchestrator | 2025-06-02 19:58:07 | INFO  | Task a655ac42-d1a8-4946-add2-71bac4b2c0d6 is in state STARTED 2025-06-02 19:58:07.158104 | orchestrator | 2025-06-02 19:58:07 | INFO  | Task 79e4a91d-9f91-4c53-92e4-cb6c7156cd28 is in state STARTED 2025-06-02 19:58:07.161907 | orchestrator | 2025-06-02 19:58:07 | 
INFO  | Task 4efb0510-077d-4334-9816-4b4ce82f3dee is in state STARTED 2025-06-02 19:58:07.161951 | orchestrator | 2025-06-02 19:58:07 | INFO  | Wait 1 second(s) until the next check 2025-06-02 19:58:10.199740 | orchestrator | 2025-06-02 19:58:10 | INFO  | Task a655ac42-d1a8-4946-add2-71bac4b2c0d6 is in state STARTED 2025-06-02 19:58:10.201412 | orchestrator | 2025-06-02 19:58:10 | INFO  | Task 79e4a91d-9f91-4c53-92e4-cb6c7156cd28 is in state STARTED 2025-06-02 19:58:10.202433 | orchestrator | 2025-06-02 19:58:10 | INFO  | Task 4efb0510-077d-4334-9816-4b4ce82f3dee is in state STARTED 2025-06-02 19:58:10.202823 | orchestrator | 2025-06-02 19:58:10 | INFO  | Wait 1 second(s) until the next check 2025-06-02 19:58:13.244244 | orchestrator | 2025-06-02 19:58:13 | INFO  | Task a655ac42-d1a8-4946-add2-71bac4b2c0d6 is in state STARTED 2025-06-02 19:58:13.246445 | orchestrator | 2025-06-02 19:58:13 | INFO  | Task 79e4a91d-9f91-4c53-92e4-cb6c7156cd28 is in state STARTED 2025-06-02 19:58:13.247546 | orchestrator | 2025-06-02 19:58:13 | INFO  | Task 4efb0510-077d-4334-9816-4b4ce82f3dee is in state STARTED 2025-06-02 19:58:13.248141 | orchestrator | 2025-06-02 19:58:13 | INFO  | Wait 1 second(s) until the next check 2025-06-02 19:58:16.279926 | orchestrator | 2025-06-02 19:58:16 | INFO  | Task e777a0ca-8b96-4a99-87fd-14201763cfd3 is in state STARTED 2025-06-02 19:58:16.282435 | orchestrator | 2025-06-02 19:58:16 | INFO  | Task a655ac42-d1a8-4946-add2-71bac4b2c0d6 is in state SUCCESS 2025-06-02 19:58:16.284428 | orchestrator | 2025-06-02 19:58:16.284500 | orchestrator | 2025-06-02 19:58:16.284516 | orchestrator | PLAY [Apply role phpmyadmin] *************************************************** 2025-06-02 19:58:16.284529 | orchestrator | 2025-06-02 19:58:16.284540 | orchestrator | TASK [osism.services.phpmyadmin : Create traefik external network] ************* 2025-06-02 19:58:16.284552 | orchestrator | Monday 02 June 2025 19:56:18 +0000 (0:00:00.229) 0:00:00.229 *********** 
2025-06-02 19:58:16.284563 | orchestrator | ok: [testbed-manager] 2025-06-02 19:58:16.284575 | orchestrator | 2025-06-02 19:58:16.284586 | orchestrator | TASK [osism.services.phpmyadmin : Create required directories] ***************** 2025-06-02 19:58:16.284597 | orchestrator | Monday 02 June 2025 19:56:19 +0000 (0:00:01.116) 0:00:01.346 *********** 2025-06-02 19:58:16.284609 | orchestrator | changed: [testbed-manager] => (item=/opt/phpmyadmin) 2025-06-02 19:58:16.284619 | orchestrator | 2025-06-02 19:58:16.284630 | orchestrator | TASK [osism.services.phpmyadmin : Copy docker-compose.yml file] **************** 2025-06-02 19:58:16.284641 | orchestrator | Monday 02 June 2025 19:56:20 +0000 (0:00:01.091) 0:00:02.438 *********** 2025-06-02 19:58:16.284652 | orchestrator | changed: [testbed-manager] 2025-06-02 19:58:16.284663 | orchestrator | 2025-06-02 19:58:16.284673 | orchestrator | TASK [osism.services.phpmyadmin : Manage phpmyadmin service] ******************* 2025-06-02 19:58:16.284684 | orchestrator | Monday 02 June 2025 19:56:22 +0000 (0:00:01.631) 0:00:04.070 *********** 2025-06-02 19:58:16.284695 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage phpmyadmin service (10 retries left). 
2025-06-02 19:58:16.284706 | orchestrator | ok: [testbed-manager] 2025-06-02 19:58:16.284716 | orchestrator | 2025-06-02 19:58:16.284727 | orchestrator | RUNNING HANDLER [osism.services.phpmyadmin : Restart phpmyadmin service] ******* 2025-06-02 19:58:16.284738 | orchestrator | Monday 02 June 2025 19:57:30 +0000 (0:01:08.395) 0:01:12.465 *********** 2025-06-02 19:58:16.284749 | orchestrator | changed: [testbed-manager] 2025-06-02 19:58:16.284760 | orchestrator | 2025-06-02 19:58:16.284772 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-02 19:58:16.284783 | orchestrator | testbed-manager : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-02 19:58:16.284796 | orchestrator | 2025-06-02 19:58:16.284807 | orchestrator | 2025-06-02 19:58:16.284819 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-02 19:58:16.284832 | orchestrator | Monday 02 June 2025 19:57:34 +0000 (0:00:03.660) 0:01:16.126 *********** 2025-06-02 19:58:16.284844 | orchestrator | =============================================================================== 2025-06-02 19:58:16.284857 | orchestrator | osism.services.phpmyadmin : Manage phpmyadmin service ------------------ 68.40s 2025-06-02 19:58:16.284870 | orchestrator | osism.services.phpmyadmin : Restart phpmyadmin service ------------------ 3.66s 2025-06-02 19:58:16.284882 | orchestrator | osism.services.phpmyadmin : Copy docker-compose.yml file ---------------- 1.63s 2025-06-02 19:58:16.284894 | orchestrator | osism.services.phpmyadmin : Create traefik external network ------------- 1.12s 2025-06-02 19:58:16.284907 | orchestrator | osism.services.phpmyadmin : Create required directories ----------------- 1.09s 2025-06-02 19:58:16.284940 | orchestrator | 2025-06-02 19:58:16.284953 | orchestrator | 2025-06-02 19:58:16.284965 | orchestrator | PLAY [Apply role common] 
******************************************************* 2025-06-02 19:58:16.284978 | orchestrator | 2025-06-02 19:58:16.284997 | orchestrator | TASK [common : include_tasks] ************************************************** 2025-06-02 19:58:16.285010 | orchestrator | Monday 02 June 2025 19:55:49 +0000 (0:00:00.217) 0:00:00.217 *********** 2025-06-02 19:58:16.285023 | orchestrator | included: /ansible/roles/common/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-02 19:58:16.285036 | orchestrator | 2025-06-02 19:58:16.285049 | orchestrator | TASK [common : Ensuring config directories exist] ****************************** 2025-06-02 19:58:16.285061 | orchestrator | Monday 02 June 2025 19:55:50 +0000 (0:00:01.168) 0:00:01.385 *********** 2025-06-02 19:58:16.285074 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'cron'}, 'cron']) 2025-06-02 19:58:16.285085 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'cron'}, 'cron']) 2025-06-02 19:58:16.285096 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'cron'}, 'cron']) 2025-06-02 19:58:16.285107 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'cron'}, 'cron']) 2025-06-02 19:58:16.285117 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-06-02 19:58:16.285128 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-06-02 19:58:16.285138 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-06-02 19:58:16.285149 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-06-02 19:58:16.285160 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'cron'}, 'cron']) 2025-06-02 19:58:16.285170 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 
'cron'}, 'cron']) 2025-06-02 19:58:16.285181 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-06-02 19:58:16.285192 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-06-02 19:58:16.285203 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-06-02 19:58:16.285213 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'cron'}, 'cron']) 2025-06-02 19:58:16.285224 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-06-02 19:58:16.285235 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-06-02 19:58:16.285261 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-06-02 19:58:16.285273 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-06-02 19:58:16.285283 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-06-02 19:58:16.285321 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-06-02 19:58:16.285332 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-06-02 19:58:16.285343 | orchestrator | 2025-06-02 19:58:16.285354 | orchestrator | TASK [common : include_tasks] ************************************************** 2025-06-02 19:58:16.285364 | orchestrator | Monday 02 June 2025 19:55:55 +0000 (0:00:04.552) 0:00:05.937 *********** 2025-06-02 19:58:16.285375 | orchestrator | included: /ansible/roles/common/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-02 19:58:16.285388 | orchestrator | 2025-06-02 19:58:16.285398 | 
orchestrator | TASK [service-cert-copy : common | Copying over extra CA certificates] ********* 2025-06-02 19:58:16.285409 | orchestrator | Monday 02 June 2025 19:55:56 +0000 (0:00:01.367) 0:00:07.305 *********** 2025-06-02 19:58:16.285433 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-02 19:58:16.285450 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-02 19:58:16.285462 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-02 19:58:16.285473 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-02 19:58:16.285485 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-02 19:58:16.285616 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 19:58:16.285631 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 
'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 19:58:16.285650 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 19:58:16.285661 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 19:58:16.285699 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 
'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-02 19:58:16.285715 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 19:58:16.285737 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 19:58:16.285755 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-02 
19:58:16.285767 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 19:58:16.285792 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 19:58:16.285809 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 19:58:16.285829 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', 
'/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 19:58:16.285855 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 19:58:16.285875 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 19:58:16.285895 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 19:58:16.285915 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': 
['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 19:58:16.285934 | orchestrator | 2025-06-02 19:58:16.285947 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS certificate] *** 2025-06-02 19:58:16.285965 | orchestrator | Monday 02 June 2025 19:56:01 +0000 (0:00:05.257) 0:00:12.562 *********** 2025-06-02 19:58:16.285978 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-06-02 19:58:16.286003 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 19:58:16.286119 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': 
['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 19:58:16.286146 | orchestrator | skipping: [testbed-manager] 2025-06-02 19:58:16.286166 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-06-02 19:58:16.286185 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 19:58:16.286197 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 19:58:16.286209 | orchestrator | 
skipping: [testbed-node-0] 2025-06-02 19:58:16.286220 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-06-02 19:58:16.286251 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 19:58:16.286273 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 19:58:16.286285 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-06-02 19:58:16.286363 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 19:58:16.286383 | orchestrator | skipping: [testbed-node-2] 2025-06-02 19:58:16.286402 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 19:58:16.286414 | orchestrator | skipping: [testbed-node-1] 2025-06-02 19:58:16.286428 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-06-02 19:58:16.286447 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 19:58:16.286474 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 19:58:16.286504 | orchestrator | skipping: [testbed-node-3] 2025-06-02 19:58:16.286523 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-06-02 19:58:16.286544 | orchestrator | skipping: 
[testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 19:58:16.286563 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 19:58:16.286610 | orchestrator | skipping: [testbed-node-5] 2025-06-02 19:58:16.286629 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-06-02 19:58:16.286641 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 
'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 19:58:16.286653 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 19:58:16.286664 | orchestrator | skipping: [testbed-node-4] 2025-06-02 19:58:16.286675 | orchestrator | 2025-06-02 19:58:16.286686 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS key] ****** 2025-06-02 19:58:16.286697 | orchestrator | Monday 02 June 2025 19:56:03 +0000 (0:00:01.727) 0:00:14.289 *********** 2025-06-02 19:58:16.286715 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-06-02 19:58:16.286735 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 19:58:16.286747 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 19:58:16.286759 | orchestrator | skipping: [testbed-manager] 2025-06-02 19:58:16.286770 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-06-02 19:58:16.286781 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': 
True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 19:58:16.286797 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 19:58:16.286808 | orchestrator | skipping: [testbed-node-0] 2025-06-02 19:58:16.286820 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-06-02 19:58:16.286837 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}})  2025-06-02 19:58:16.286853 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 19:58:16.286865 | orchestrator | skipping: [testbed-node-1] 2025-06-02 19:58:16.286876 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-06-02 19:58:16.286888 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 19:58:16.286899 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 19:58:16.286910 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-06-02 19:58:16.286921 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 19:58:16.286938 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  
2025-06-02 19:58:16.286950 | orchestrator | skipping: [testbed-node-2] 2025-06-02 19:58:16.286960 | orchestrator | skipping: [testbed-node-3] 2025-06-02 19:58:16.286971 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-06-02 19:58:16.286990 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 19:58:16.287006 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 19:58:16.287018 | orchestrator | skipping: [testbed-node-4] 2025-06-02 19:58:16.287029 | orchestrator | skipping: [testbed-node-5] => 
(item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-06-02 19:58:16.287045 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 19:58:16.287057 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 19:58:16.287084 | orchestrator | skipping: [testbed-node-5] 2025-06-02 19:58:16.287111 | orchestrator | 2025-06-02 19:58:16.287133 | orchestrator | TASK [common : Copying over /run subdirectories conf] ************************** 2025-06-02 19:58:16.287151 | orchestrator | Monday 02 June 2025 19:56:06 +0000 (0:00:02.956) 0:00:17.246 *********** 2025-06-02 
19:58:16.287168 | orchestrator | skipping: [testbed-manager] 2025-06-02 19:58:16.287185 | orchestrator | skipping: [testbed-node-0] 2025-06-02 19:58:16.287203 | orchestrator | skipping: [testbed-node-1] 2025-06-02 19:58:16.287221 | orchestrator | skipping: [testbed-node-2] 2025-06-02 19:58:16.287236 | orchestrator | skipping: [testbed-node-3] 2025-06-02 19:58:16.287252 | orchestrator | skipping: [testbed-node-4] 2025-06-02 19:58:16.287266 | orchestrator | skipping: [testbed-node-5] 2025-06-02 19:58:16.287282 | orchestrator | 2025-06-02 19:58:16.287327 | orchestrator | TASK [common : Restart systemd-tmpfiles] *************************************** 2025-06-02 19:58:16.287347 | orchestrator | Monday 02 June 2025 19:56:07 +0000 (0:00:01.011) 0:00:18.257 *********** 2025-06-02 19:58:16.287364 | orchestrator | skipping: [testbed-manager] 2025-06-02 19:58:16.287381 | orchestrator | skipping: [testbed-node-0] 2025-06-02 19:58:16.287398 | orchestrator | skipping: [testbed-node-1] 2025-06-02 19:58:16.287414 | orchestrator | skipping: [testbed-node-2] 2025-06-02 19:58:16.287432 | orchestrator | skipping: [testbed-node-3] 2025-06-02 19:58:16.287450 | orchestrator | skipping: [testbed-node-4] 2025-06-02 19:58:16.287469 | orchestrator | skipping: [testbed-node-5] 2025-06-02 19:58:16.287488 | orchestrator | 2025-06-02 19:58:16.287509 | orchestrator | TASK [common : Copying over config.json files for services] ******************** 2025-06-02 19:58:16.287528 | orchestrator | Monday 02 June 2025 19:56:08 +0000 (0:00:00.906) 0:00:19.164 *********** 2025-06-02 19:58:16.287562 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-02 19:58:16.287575 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-02 19:58:16.287587 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-02 19:58:16.287598 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 19:58:16.287627 | orchestrator | changed: [testbed-node-1] => 
(item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-02 19:58:16.287639 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 19:58:16.287650 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-02 19:58:16.287668 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 
'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 19:58:16.287680 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-02 19:58:16.287691 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 19:58:16.287702 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 19:58:16.287725 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-02 19:58:16.287737 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 19:58:16.287749 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 19:58:16.287760 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 19:58:16.287785 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 19:58:16.287797 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 19:58:16.287808 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 19:58:16.287819 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 19:58:16.287837 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 19:58:16.287853 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 19:58:16.287864 | orchestrator | 2025-06-02 19:58:16.287875 | orchestrator | TASK [common : Find custom fluentd input config files] ************************* 2025-06-02 19:58:16.287886 | orchestrator | Monday 02 June 2025 19:56:14 +0000 (0:00:06.282) 0:00:25.446 *********** 2025-06-02 19:58:16.287897 | orchestrator | [WARNING]: Skipped 2025-06-02 19:58:16.287908 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' path due 2025-06-02 19:58:16.287918 | 
orchestrator | to this access issue: 2025-06-02 19:58:16.287929 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' is not a 2025-06-02 19:58:16.287939 | orchestrator | directory 2025-06-02 19:58:16.287955 | orchestrator | ok: [testbed-manager -> localhost] 2025-06-02 19:58:16.287973 | orchestrator | 2025-06-02 19:58:16.287991 | orchestrator | TASK [common : Find custom fluentd filter config files] ************************ 2025-06-02 19:58:16.288008 | orchestrator | Monday 02 June 2025 19:56:16 +0000 (0:00:01.635) 0:00:27.081 *********** 2025-06-02 19:58:16.288028 | orchestrator | [WARNING]: Skipped 2025-06-02 19:58:16.288047 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' path due 2025-06-02 19:58:16.288066 | orchestrator | to this access issue: 2025-06-02 19:58:16.288079 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' is not a 2025-06-02 19:58:16.288090 | orchestrator | directory 2025-06-02 19:58:16.288101 | orchestrator | ok: [testbed-manager -> localhost] 2025-06-02 19:58:16.288112 | orchestrator | 2025-06-02 19:58:16.288122 | orchestrator | TASK [common : Find custom fluentd format config files] ************************ 2025-06-02 19:58:16.288133 | orchestrator | Monday 02 June 2025 19:56:17 +0000 (0:00:00.971) 0:00:28.053 *********** 2025-06-02 19:58:16.288143 | orchestrator | [WARNING]: Skipped 2025-06-02 19:58:16.288160 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' path due 2025-06-02 19:58:16.288178 | orchestrator | to this access issue: 2025-06-02 19:58:16.288197 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' is not a 2025-06-02 19:58:16.288213 | orchestrator | directory 2025-06-02 19:58:16.288231 | orchestrator | ok: [testbed-manager -> localhost] 2025-06-02 19:58:16.288249 | orchestrator | 2025-06-02 19:58:16.288277 | orchestrator | TASK [common : Find custom 
fluentd output config files] ************************ 2025-06-02 19:58:16.288371 | orchestrator | Monday 02 June 2025 19:56:18 +0000 (0:00:00.863) 0:00:28.916 *********** 2025-06-02 19:58:16.288384 | orchestrator | [WARNING]: Skipped 2025-06-02 19:58:16.288395 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' path due 2025-06-02 19:58:16.288406 | orchestrator | to this access issue: 2025-06-02 19:58:16.288417 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' is not a 2025-06-02 19:58:16.288437 | orchestrator | directory 2025-06-02 19:58:16.288447 | orchestrator | ok: [testbed-manager -> localhost] 2025-06-02 19:58:16.288458 | orchestrator | 2025-06-02 19:58:16.288469 | orchestrator | TASK [common : Copying over fluentd.conf] ************************************** 2025-06-02 19:58:16.288480 | orchestrator | Monday 02 June 2025 19:56:18 +0000 (0:00:00.832) 0:00:29.749 *********** 2025-06-02 19:58:16.288490 | orchestrator | changed: [testbed-manager] 2025-06-02 19:58:16.288501 | orchestrator | changed: [testbed-node-0] 2025-06-02 19:58:16.288512 | orchestrator | changed: [testbed-node-1] 2025-06-02 19:58:16.288522 | orchestrator | changed: [testbed-node-3] 2025-06-02 19:58:16.288533 | orchestrator | changed: [testbed-node-4] 2025-06-02 19:58:16.288544 | orchestrator | changed: [testbed-node-2] 2025-06-02 19:58:16.288554 | orchestrator | changed: [testbed-node-5] 2025-06-02 19:58:16.288565 | orchestrator | 2025-06-02 19:58:16.288576 | orchestrator | TASK [common : Copying over cron logrotate config file] ************************ 2025-06-02 19:58:16.288587 | orchestrator | Monday 02 June 2025 19:56:23 +0000 (0:00:04.701) 0:00:34.450 *********** 2025-06-02 19:58:16.288598 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-06-02 19:58:16.288609 | orchestrator | changed: [testbed-node-1] => 
(item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-06-02 19:58:16.288620 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-06-02 19:58:16.288630 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-06-02 19:58:16.288641 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-06-02 19:58:16.288651 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-06-02 19:58:16.288662 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-06-02 19:58:16.288673 | orchestrator | 2025-06-02 19:58:16.288683 | orchestrator | TASK [common : Ensure RabbitMQ Erlang cookie exists] *************************** 2025-06-02 19:58:16.288694 | orchestrator | Monday 02 June 2025 19:56:26 +0000 (0:00:03.043) 0:00:37.494 *********** 2025-06-02 19:58:16.288705 | orchestrator | changed: [testbed-manager] 2025-06-02 19:58:16.288715 | orchestrator | changed: [testbed-node-0] 2025-06-02 19:58:16.288726 | orchestrator | changed: [testbed-node-1] 2025-06-02 19:58:16.288736 | orchestrator | changed: [testbed-node-2] 2025-06-02 19:58:16.288747 | orchestrator | changed: [testbed-node-3] 2025-06-02 19:58:16.288757 | orchestrator | changed: [testbed-node-4] 2025-06-02 19:58:16.288768 | orchestrator | changed: [testbed-node-5] 2025-06-02 19:58:16.288778 | orchestrator | 2025-06-02 19:58:16.288795 | orchestrator | TASK [common : Ensuring config directories have correct owner and permission] *** 2025-06-02 19:58:16.288806 | orchestrator | Monday 02 June 2025 19:56:29 +0000 (0:00:02.996) 0:00:40.490 *********** 2025-06-02 19:58:16.288846 | orchestrator | ok: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-02 19:58:16.288859 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 19:58:16.288877 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-02 19:58:16.288894 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 
'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 19:58:16.288905 | orchestrator | ok: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-02 19:58:16.288915 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 19:58:16.288926 | orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': 
{}}}) 2025-06-02 19:58:16.288942 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 19:58:16.288953 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 19:58:16.288963 | orchestrator | ok: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-02 19:58:16.288984 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 19:58:16.288995 | orchestrator | ok: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-02 19:58:16.289005 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 19:58:16.289015 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 19:58:16.289026 | orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 19:58:16.289040 | orchestrator | ok: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-02 19:58:16.289050 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 19:58:16.289066 | orchestrator | ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-02 19:58:16.289085 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 19:58:16.289096 | orchestrator | ok: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 19:58:16.289106 | orchestrator | ok: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 19:58:16.289116 | orchestrator | 2025-06-02 19:58:16.289126 | orchestrator | TASK [common : Copy rabbitmq-env.conf to kolla toolbox] ************************ 2025-06-02 19:58:16.289136 | orchestrator | Monday 02 June 2025 19:56:32 +0000 (0:00:03.134) 0:00:43.625 
*********** 2025-06-02 19:58:16.289145 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-06-02 19:58:16.289155 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-06-02 19:58:16.289164 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-06-02 19:58:16.289174 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-06-02 19:58:16.289183 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-06-02 19:58:16.289193 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-06-02 19:58:16.289202 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-06-02 19:58:16.289212 | orchestrator | 2025-06-02 19:58:16.289221 | orchestrator | TASK [common : Copy rabbitmq erl_inetrc to kolla toolbox] ********************** 2025-06-02 19:58:16.289231 | orchestrator | Monday 02 June 2025 19:56:36 +0000 (0:00:03.419) 0:00:47.045 *********** 2025-06-02 19:58:16.289240 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-06-02 19:58:16.289250 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-06-02 19:58:16.289266 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-06-02 19:58:16.289275 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-06-02 19:58:16.289285 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-06-02 19:58:16.289321 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-06-02 
19:58:16.289332 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-06-02 19:58:16.289342 | orchestrator | 2025-06-02 19:58:16.289351 | orchestrator | TASK [common : Check common containers] **************************************** 2025-06-02 19:58:16.289361 | orchestrator | Monday 02 June 2025 19:56:38 +0000 (0:00:02.222) 0:00:49.267 *********** 2025-06-02 19:58:16.289371 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-02 19:58:16.289381 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-02 19:58:16.289399 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-02 19:58:16.289410 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-02 19:58:16.289428 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-02 19:58:16.289442 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 19:58:16.289458 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 19:58:16.289469 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 19:58:16.289484 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 19:58:16.289495 | orchestrator | changed: [testbed-node-4] => (item={'key': 
'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-02 19:58:16.289505 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 19:58:16.289515 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 19:58:16.289525 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-02 19:58:16.289545 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 19:58:16.289555 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 19:58:16.289566 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 19:58:16.289576 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': 
'/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 19:58:16.289592 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 19:58:16.289602 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 19:58:16.289613 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 19:58:16.289623 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': 
{'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 19:58:16.289638 | orchestrator | 2025-06-02 19:58:16.289648 | orchestrator | TASK [common : Creating log volume] ******************************************** 2025-06-02 19:58:16.289658 | orchestrator | Monday 02 June 2025 19:56:41 +0000 (0:00:03.597) 0:00:52.865 *********** 2025-06-02 19:58:16.289667 | orchestrator | changed: [testbed-manager] 2025-06-02 19:58:16.289677 | orchestrator | changed: [testbed-node-0] 2025-06-02 19:58:16.289686 | orchestrator | changed: [testbed-node-1] 2025-06-02 19:58:16.289700 | orchestrator | changed: [testbed-node-2] 2025-06-02 19:58:16.289710 | orchestrator | changed: [testbed-node-3] 2025-06-02 19:58:16.289719 | orchestrator | changed: [testbed-node-4] 2025-06-02 19:58:16.289729 | orchestrator | changed: [testbed-node-5] 2025-06-02 19:58:16.289738 | orchestrator | 2025-06-02 19:58:16.289748 | orchestrator | TASK [common : Link kolla_logs volume to /var/log/kolla] *********************** 2025-06-02 19:58:16.289757 | orchestrator | Monday 02 June 2025 19:56:43 +0000 (0:00:01.427) 0:00:54.292 *********** 2025-06-02 19:58:16.289767 | orchestrator | changed: [testbed-manager] 2025-06-02 19:58:16.289776 | orchestrator | changed: [testbed-node-0] 2025-06-02 19:58:16.289786 | orchestrator | changed: [testbed-node-1] 2025-06-02 19:58:16.289795 | orchestrator | changed: [testbed-node-2] 2025-06-02 19:58:16.289804 | orchestrator | changed: [testbed-node-3] 2025-06-02 19:58:16.289814 | orchestrator | changed: [testbed-node-4] 2025-06-02 19:58:16.289824 | orchestrator | changed: [testbed-node-5] 2025-06-02 19:58:16.289833 | orchestrator | 2025-06-02 19:58:16.289843 | 
orchestrator | TASK [common : Flush handlers] ************************************************* 2025-06-02 19:58:16.289852 | orchestrator | Monday 02 June 2025 19:56:44 +0000 (0:00:00.950) 0:00:55.243 *********** 2025-06-02 19:58:16.289861 | orchestrator | 2025-06-02 19:58:16.289871 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-06-02 19:58:16.289881 | orchestrator | Monday 02 June 2025 19:56:44 +0000 (0:00:00.156) 0:00:55.399 *********** 2025-06-02 19:58:16.289890 | orchestrator | 2025-06-02 19:58:16.289900 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-06-02 19:58:16.289909 | orchestrator | Monday 02 June 2025 19:56:44 +0000 (0:00:00.048) 0:00:55.448 *********** 2025-06-02 19:58:16.289918 | orchestrator | 2025-06-02 19:58:16.289928 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-06-02 19:58:16.289937 | orchestrator | Monday 02 June 2025 19:56:44 +0000 (0:00:00.049) 0:00:55.498 *********** 2025-06-02 19:58:16.289947 | orchestrator | 2025-06-02 19:58:16.289956 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-06-02 19:58:16.289966 | orchestrator | Monday 02 June 2025 19:56:44 +0000 (0:00:00.048) 0:00:55.546 *********** 2025-06-02 19:58:16.289975 | orchestrator | 2025-06-02 19:58:16.289984 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-06-02 19:58:16.289994 | orchestrator | Monday 02 June 2025 19:56:44 +0000 (0:00:00.048) 0:00:55.595 *********** 2025-06-02 19:58:16.290003 | orchestrator | 2025-06-02 19:58:16.290042 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-06-02 19:58:16.290055 | orchestrator | Monday 02 June 2025 19:56:44 +0000 (0:00:00.050) 0:00:55.645 *********** 2025-06-02 19:58:16.290127 | orchestrator | 2025-06-02 19:58:16.290141 
| orchestrator | RUNNING HANDLER [common : Restart fluentd container] *************************** 2025-06-02 19:58:16.290150 | orchestrator | Monday 02 June 2025 19:56:44 +0000 (0:00:00.066) 0:00:55.712 *********** 2025-06-02 19:58:16.290168 | orchestrator | changed: [testbed-node-0] 2025-06-02 19:58:16.290178 | orchestrator | changed: [testbed-node-4] 2025-06-02 19:58:16.290187 | orchestrator | changed: [testbed-node-2] 2025-06-02 19:58:16.290197 | orchestrator | changed: [testbed-manager] 2025-06-02 19:58:16.290214 | orchestrator | changed: [testbed-node-1] 2025-06-02 19:58:16.290224 | orchestrator | changed: [testbed-node-3] 2025-06-02 19:58:16.290233 | orchestrator | changed: [testbed-node-5] 2025-06-02 19:58:16.290243 | orchestrator | 2025-06-02 19:58:16.290252 | orchestrator | RUNNING HANDLER [common : Restart kolla-toolbox container] ********************* 2025-06-02 19:58:16.290262 | orchestrator | Monday 02 June 2025 19:57:28 +0000 (0:00:43.540) 0:01:39.252 *********** 2025-06-02 19:58:16.290271 | orchestrator | changed: [testbed-node-0] 2025-06-02 19:58:16.290281 | orchestrator | changed: [testbed-node-4] 2025-06-02 19:58:16.290317 | orchestrator | changed: [testbed-node-2] 2025-06-02 19:58:16.290334 | orchestrator | changed: [testbed-node-1] 2025-06-02 19:58:16.290351 | orchestrator | changed: [testbed-manager] 2025-06-02 19:58:16.290368 | orchestrator | changed: [testbed-node-3] 2025-06-02 19:58:16.290385 | orchestrator | changed: [testbed-node-5] 2025-06-02 19:58:16.290395 | orchestrator | 2025-06-02 19:58:16.290405 | orchestrator | RUNNING HANDLER [common : Initializing toolbox container using normal user] **** 2025-06-02 19:58:16.290414 | orchestrator | Monday 02 June 2025 19:58:02 +0000 (0:00:34.388) 0:02:13.640 *********** 2025-06-02 19:58:16.290424 | orchestrator | ok: [testbed-node-0] 2025-06-02 19:58:16.290433 | orchestrator | ok: [testbed-node-1] 2025-06-02 19:58:16.290443 | orchestrator | ok: [testbed-manager] 2025-06-02 19:58:16.290452 | 
orchestrator | ok: [testbed-node-2] 2025-06-02 19:58:16.290462 | orchestrator | ok: [testbed-node-3] 2025-06-02 19:58:16.290471 | orchestrator | ok: [testbed-node-4] 2025-06-02 19:58:16.290480 | orchestrator | ok: [testbed-node-5] 2025-06-02 19:58:16.290490 | orchestrator | 2025-06-02 19:58:16.290500 | orchestrator | RUNNING HANDLER [common : Restart cron container] ****************************** 2025-06-02 19:58:16.290509 | orchestrator | Monday 02 June 2025 19:58:05 +0000 (0:00:02.413) 0:02:16.054 *********** 2025-06-02 19:58:16.290519 | orchestrator | changed: [testbed-manager] 2025-06-02 19:58:16.290528 | orchestrator | changed: [testbed-node-4] 2025-06-02 19:58:16.290538 | orchestrator | changed: [testbed-node-0] 2025-06-02 19:58:16.290547 | orchestrator | changed: [testbed-node-2] 2025-06-02 19:58:16.290556 | orchestrator | changed: [testbed-node-1] 2025-06-02 19:58:16.290566 | orchestrator | changed: [testbed-node-3] 2025-06-02 19:58:16.290575 | orchestrator | changed: [testbed-node-5] 2025-06-02 19:58:16.290585 | orchestrator | 2025-06-02 19:58:16.290594 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-02 19:58:16.290604 | orchestrator | testbed-manager : ok=22  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-06-02 19:58:16.290615 | orchestrator | testbed-node-0 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-06-02 19:58:16.290625 | orchestrator | testbed-node-1 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-06-02 19:58:16.290634 | orchestrator | testbed-node-2 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-06-02 19:58:16.290649 | orchestrator | testbed-node-3 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-06-02 19:58:16.290659 | orchestrator | testbed-node-4 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-06-02 
19:58:16.290668 | orchestrator | testbed-node-5 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-06-02 19:58:16.290678 | orchestrator | 2025-06-02 19:58:16.290688 | orchestrator | 2025-06-02 19:58:16.290697 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-02 19:58:16.290707 | orchestrator | Monday 02 June 2025 19:58:14 +0000 (0:00:09.362) 0:02:25.417 *********** 2025-06-02 19:58:16.290716 | orchestrator | =============================================================================== 2025-06-02 19:58:16.290732 | orchestrator | common : Restart fluentd container ------------------------------------- 43.54s 2025-06-02 19:58:16.290742 | orchestrator | common : Restart kolla-toolbox container ------------------------------- 34.39s 2025-06-02 19:58:16.290751 | orchestrator | common : Restart cron container ----------------------------------------- 9.36s 2025-06-02 19:58:16.290761 | orchestrator | common : Copying over config.json files for services -------------------- 6.28s 2025-06-02 19:58:16.290770 | orchestrator | service-cert-copy : common | Copying over extra CA certificates --------- 5.26s 2025-06-02 19:58:16.290779 | orchestrator | common : Copying over fluentd.conf -------------------------------------- 4.70s 2025-06-02 19:58:16.290789 | orchestrator | common : Ensuring config directories exist ------------------------------ 4.55s 2025-06-02 19:58:16.290798 | orchestrator | common : Check common containers ---------------------------------------- 3.60s 2025-06-02 19:58:16.290807 | orchestrator | common : Copy rabbitmq-env.conf to kolla toolbox ------------------------ 3.42s 2025-06-02 19:58:16.290817 | orchestrator | common : Ensuring config directories have correct owner and permission --- 3.13s 2025-06-02 19:58:16.290826 | orchestrator | common : Copying over cron logrotate config file ------------------------ 3.04s 2025-06-02 19:58:16.290835 | orchestrator | common : 
Ensure RabbitMQ Erlang cookie exists --------------------------- 3.00s 2025-06-02 19:58:16.290845 | orchestrator | service-cert-copy : common | Copying over backend internal TLS key ------ 2.96s 2025-06-02 19:58:16.290855 | orchestrator | common : Initializing toolbox container using normal user --------------- 2.41s 2025-06-02 19:58:16.290869 | orchestrator | common : Copy rabbitmq erl_inetrc to kolla toolbox ---------------------- 2.22s 2025-06-02 19:58:16.290879 | orchestrator | service-cert-copy : common | Copying over backend internal TLS certificate --- 1.73s 2025-06-02 19:58:16.290889 | orchestrator | common : Find custom fluentd input config files ------------------------- 1.63s 2025-06-02 19:58:16.290898 | orchestrator | common : Creating log volume -------------------------------------------- 1.43s 2025-06-02 19:58:16.290907 | orchestrator | common : include_tasks -------------------------------------------------- 1.37s 2025-06-02 19:58:16.290917 | orchestrator | common : include_tasks -------------------------------------------------- 1.17s 2025-06-02 19:58:16.290926 | orchestrator | 2025-06-02 19:58:16 | INFO  | Task 9fa5bfce-050d-4d4c-afaf-444c2f7e586f is in state STARTED 2025-06-02 19:58:16.290936 | orchestrator | 2025-06-02 19:58:16 | INFO  | Task 79e4a91d-9f91-4c53-92e4-cb6c7156cd28 is in state STARTED 2025-06-02 19:58:16.290945 | orchestrator | 2025-06-02 19:58:16 | INFO  | Task 5c204b22-7cbe-4782-8d82-5ddaaab26c83 is in state STARTED 2025-06-02 19:58:16.291562 | orchestrator | 2025-06-02 19:58:16 | INFO  | Task 4efb0510-077d-4334-9816-4b4ce82f3dee is in state STARTED 2025-06-02 19:58:16.292012 | orchestrator | 2025-06-02 19:58:16 | INFO  | Task 4b60122b-2761-44e0-9fd9-858842834dfd is in state STARTED 2025-06-02 19:58:16.292043 | orchestrator | 2025-06-02 19:58:16 | INFO  | Wait 1 second(s) until the next check 2025-06-02 19:58:19.337765 | orchestrator | 2025-06-02 19:58:19 | INFO  | Task e777a0ca-8b96-4a99-87fd-14201763cfd3 is in state STARTED 
2025-06-02 19:58:19.337874 | orchestrator | 2025-06-02 19:58:19 | INFO  | Task 9fa5bfce-050d-4d4c-afaf-444c2f7e586f is in state STARTED 2025-06-02 19:58:19.337889 | orchestrator | 2025-06-02 19:58:19 | INFO  | Task 79e4a91d-9f91-4c53-92e4-cb6c7156cd28 is in state STARTED 2025-06-02 19:58:19.337900 | orchestrator | 2025-06-02 19:58:19 | INFO  | Task 5c204b22-7cbe-4782-8d82-5ddaaab26c83 is in state STARTED 2025-06-02 19:58:19.337911 | orchestrator | 2025-06-02 19:58:19 | INFO  | Task 4efb0510-077d-4334-9816-4b4ce82f3dee is in state STARTED 2025-06-02 19:58:19.337922 | orchestrator | 2025-06-02 19:58:19 | INFO  | Task 4b60122b-2761-44e0-9fd9-858842834dfd is in state STARTED 2025-06-02 19:58:19.337933 | orchestrator | 2025-06-02 19:58:19 | INFO  | Wait 1 second(s) until the next check 2025-06-02 19:58:22.360782 | orchestrator | 2025-06-02 19:58:22 | INFO  | Task e777a0ca-8b96-4a99-87fd-14201763cfd3 is in state STARTED 2025-06-02 19:58:22.361455 | orchestrator | 2025-06-02 19:58:22 | INFO  | Task 9fa5bfce-050d-4d4c-afaf-444c2f7e586f is in state STARTED 2025-06-02 19:58:22.363199 | orchestrator | 2025-06-02 19:58:22 | INFO  | Task 79e4a91d-9f91-4c53-92e4-cb6c7156cd28 is in state STARTED 2025-06-02 19:58:22.363821 | orchestrator | 2025-06-02 19:58:22 | INFO  | Task 5c204b22-7cbe-4782-8d82-5ddaaab26c83 is in state STARTED 2025-06-02 19:58:22.364269 | orchestrator | 2025-06-02 19:58:22 | INFO  | Task 4efb0510-077d-4334-9816-4b4ce82f3dee is in state STARTED 2025-06-02 19:58:22.365630 | orchestrator | 2025-06-02 19:58:22 | INFO  | Task 4b60122b-2761-44e0-9fd9-858842834dfd is in state STARTED 2025-06-02 19:58:22.365703 | orchestrator | 2025-06-02 19:58:22 | INFO  | Wait 1 second(s) until the next check 2025-06-02 19:58:25.398822 | orchestrator | 2025-06-02 19:58:25 | INFO  | Task e777a0ca-8b96-4a99-87fd-14201763cfd3 is in state STARTED 2025-06-02 19:58:25.399144 | orchestrator | 2025-06-02 19:58:25 | INFO  | Task 9fa5bfce-050d-4d4c-afaf-444c2f7e586f is in state STARTED 
2025-06-02 19:58:25.399660 | orchestrator | 2025-06-02 19:58:25 | INFO  | Task 79e4a91d-9f91-4c53-92e4-cb6c7156cd28 is in state STARTED 2025-06-02 19:58:25.406866 | orchestrator | 2025-06-02 19:58:25 | INFO  | Task 5c204b22-7cbe-4782-8d82-5ddaaab26c83 is in state STARTED 2025-06-02 19:58:25.408533 | orchestrator | 2025-06-02 19:58:25 | INFO  | Task 4efb0510-077d-4334-9816-4b4ce82f3dee is in state STARTED 2025-06-02 19:58:25.410958 | orchestrator | 2025-06-02 19:58:25 | INFO  | Task 4b60122b-2761-44e0-9fd9-858842834dfd is in state STARTED 2025-06-02 19:58:25.410998 | orchestrator | 2025-06-02 19:58:25 | INFO  | Wait 1 second(s) until the next check 2025-06-02 19:58:28.460351 | orchestrator | 2025-06-02 19:58:28 | INFO  | Task e777a0ca-8b96-4a99-87fd-14201763cfd3 is in state STARTED 2025-06-02 19:58:28.461589 | orchestrator | 2025-06-02 19:58:28 | INFO  | Task 9fa5bfce-050d-4d4c-afaf-444c2f7e586f is in state STARTED 2025-06-02 19:58:28.464840 | orchestrator | 2025-06-02 19:58:28 | INFO  | Task 79e4a91d-9f91-4c53-92e4-cb6c7156cd28 is in state STARTED 2025-06-02 19:58:28.465462 | orchestrator | 2025-06-02 19:58:28 | INFO  | Task 5c204b22-7cbe-4782-8d82-5ddaaab26c83 is in state STARTED 2025-06-02 19:58:28.465981 | orchestrator | 2025-06-02 19:58:28 | INFO  | Task 4efb0510-077d-4334-9816-4b4ce82f3dee is in state STARTED 2025-06-02 19:58:28.466581 | orchestrator | 2025-06-02 19:58:28 | INFO  | Task 4b60122b-2761-44e0-9fd9-858842834dfd is in state STARTED 2025-06-02 19:58:28.466608 | orchestrator | 2025-06-02 19:58:28 | INFO  | Wait 1 second(s) until the next check 2025-06-02 19:58:31.488638 | orchestrator | 2025-06-02 19:58:31 | INFO  | Task e777a0ca-8b96-4a99-87fd-14201763cfd3 is in state STARTED 2025-06-02 19:58:31.489152 | orchestrator | 2025-06-02 19:58:31 | INFO  | Task 9fa5bfce-050d-4d4c-afaf-444c2f7e586f is in state STARTED 2025-06-02 19:58:31.491844 | orchestrator | 2025-06-02 19:58:31 | INFO  | Task 79e4a91d-9f91-4c53-92e4-cb6c7156cd28 is in state STARTED 
2025-06-02 19:58:31.491899 | orchestrator | 2025-06-02 19:58:31 | INFO  | Task 5c204b22-7cbe-4782-8d82-5ddaaab26c83 is in state STARTED
2025-06-02 19:58:31.492393 | orchestrator | 2025-06-02 19:58:31 | INFO  | Task 4efb0510-077d-4334-9816-4b4ce82f3dee is in state STARTED
2025-06-02 19:58:31.495081 | orchestrator | 2025-06-02 19:58:31 | INFO  | Task 4b60122b-2761-44e0-9fd9-858842834dfd is in state STARTED
2025-06-02 19:58:31.495122 | orchestrator | 2025-06-02 19:58:31 | INFO  | Wait 1 second(s) until the next check
2025-06-02 19:58:34.521077 | orchestrator | 2025-06-02 19:58:34 | INFO  | Task e777a0ca-8b96-4a99-87fd-14201763cfd3 is in state STARTED
2025-06-02 19:58:34.522311 | orchestrator | 2025-06-02 19:58:34 | INFO  | Task 9fa5bfce-050d-4d4c-afaf-444c2f7e586f is in state STARTED
2025-06-02 19:58:34.524356 | orchestrator | 2025-06-02 19:58:34 | INFO  | Task 79e4a91d-9f91-4c53-92e4-cb6c7156cd28 is in state STARTED
2025-06-02 19:58:34.524901 | orchestrator | 2025-06-02 19:58:34 | INFO  | Task 5c204b22-7cbe-4782-8d82-5ddaaab26c83 is in state STARTED
2025-06-02 19:58:34.527561 | orchestrator | 2025-06-02 19:58:34 | INFO  | Task 4efb0510-077d-4334-9816-4b4ce82f3dee is in state STARTED
2025-06-02 19:58:34.528786 | orchestrator | 2025-06-02 19:58:34 | INFO  | Task 4b60122b-2761-44e0-9fd9-858842834dfd is in state STARTED
2025-06-02 19:58:34.528819 | orchestrator | 2025-06-02 19:58:34 | INFO  | Wait 1 second(s) until the next check
2025-06-02 19:58:37.570382 | orchestrator | 2025-06-02 19:58:37 | INFO  | Task e777a0ca-8b96-4a99-87fd-14201763cfd3 is in state STARTED
2025-06-02 19:58:37.570863 | orchestrator | 2025-06-02 19:58:37 | INFO  | Task 9fa5bfce-050d-4d4c-afaf-444c2f7e586f is in state STARTED
2025-06-02 19:58:37.573984 | orchestrator | 2025-06-02 19:58:37 | INFO  | Task 79e4a91d-9f91-4c53-92e4-cb6c7156cd28 is in state STARTED
2025-06-02 19:58:37.574577 | orchestrator | 2025-06-02 19:58:37 | INFO  | Task 5c204b22-7cbe-4782-8d82-5ddaaab26c83 is in state STARTED
2025-06-02 19:58:37.578481 | orchestrator | 2025-06-02 19:58:37 | INFO  | Task 4efb0510-077d-4334-9816-4b4ce82f3dee is in state STARTED
2025-06-02 19:58:37.579563 | orchestrator | 2025-06-02 19:58:37 | INFO  | Task 4b60122b-2761-44e0-9fd9-858842834dfd is in state STARTED
2025-06-02 19:58:37.579601 | orchestrator | 2025-06-02 19:58:37 | INFO  | Wait 1 second(s) until the next check
2025-06-02 19:58:40.617962 | orchestrator | 2025-06-02 19:58:40 | INFO  | Task e777a0ca-8b96-4a99-87fd-14201763cfd3 is in state STARTED
2025-06-02 19:58:40.618233 | orchestrator | 2025-06-02 19:58:40 | INFO  | Task 9fa5bfce-050d-4d4c-afaf-444c2f7e586f is in state SUCCESS
2025-06-02 19:58:40.621339 | orchestrator | 2025-06-02 19:58:40 | INFO  | Task 9a1f4e8c-0c35-4ae6-b043-2ed60dabadb7 is in state STARTED
2025-06-02 19:58:40.628103 | orchestrator | 2025-06-02 19:58:40 | INFO  | Task 79e4a91d-9f91-4c53-92e4-cb6c7156cd28 is in state STARTED
2025-06-02 19:58:40.630819 | orchestrator | 2025-06-02 19:58:40 | INFO  | Task 5c204b22-7cbe-4782-8d82-5ddaaab26c83 is in state STARTED
2025-06-02 19:58:40.634540 | orchestrator | 2025-06-02 19:58:40 | INFO  | Task 4efb0510-077d-4334-9816-4b4ce82f3dee is in state STARTED
2025-06-02 19:58:40.635579 | orchestrator | 2025-06-02 19:58:40 | INFO  | Task 4b60122b-2761-44e0-9fd9-858842834dfd is in state STARTED
2025-06-02 19:58:40.635616 | orchestrator | 2025-06-02 19:58:40 | INFO  | Wait 1 second(s) until the next check
2025-06-02 19:58:43.659950 | orchestrator | 2025-06-02 19:58:43 | INFO  | Task e777a0ca-8b96-4a99-87fd-14201763cfd3 is in state STARTED
2025-06-02 19:58:43.660150 | orchestrator | 2025-06-02 19:58:43 | INFO  | Task 9a1f4e8c-0c35-4ae6-b043-2ed60dabadb7 is in state STARTED
2025-06-02 19:58:43.660651 | orchestrator | 2025-06-02 19:58:43 | INFO  | Task 79e4a91d-9f91-4c53-92e4-cb6c7156cd28 is in state STARTED
2025-06-02 19:58:43.661933 | orchestrator | 2025-06-02 19:58:43 | INFO  | Task 5c204b22-7cbe-4782-8d82-5ddaaab26c83 is in state STARTED
2025-06-02 19:58:43.662515 | orchestrator | 2025-06-02 19:58:43 | INFO  | Task 4efb0510-077d-4334-9816-4b4ce82f3dee is in state STARTED
2025-06-02 19:58:43.663206 | orchestrator | 2025-06-02 19:58:43 | INFO  | Task 4b60122b-2761-44e0-9fd9-858842834dfd is in state STARTED
2025-06-02 19:58:43.663362 | orchestrator | 2025-06-02 19:58:43 | INFO  | Wait 1 second(s) until the next check
2025-06-02 19:58:46.716672 | orchestrator | 2025-06-02 19:58:46 | INFO  | Task e777a0ca-8b96-4a99-87fd-14201763cfd3 is in state STARTED
2025-06-02 19:58:46.716841 | orchestrator | 2025-06-02 19:58:46 | INFO  | Task 9a1f4e8c-0c35-4ae6-b043-2ed60dabadb7 is in state STARTED
2025-06-02 19:58:46.717620 | orchestrator | 2025-06-02 19:58:46 | INFO  | Task 79e4a91d-9f91-4c53-92e4-cb6c7156cd28 is in state STARTED
2025-06-02 19:58:46.719084 | orchestrator | 2025-06-02 19:58:46 | INFO  | Task 5c204b22-7cbe-4782-8d82-5ddaaab26c83 is in state STARTED
2025-06-02 19:58:46.719123 | orchestrator | 2025-06-02 19:58:46 | INFO  | Task 4efb0510-077d-4334-9816-4b4ce82f3dee is in state STARTED
2025-06-02 19:58:46.721588 | orchestrator | 2025-06-02 19:58:46 | INFO  | Task 4b60122b-2761-44e0-9fd9-858842834dfd is in state STARTED
2025-06-02 19:58:46.721671 | orchestrator | 2025-06-02 19:58:46 | INFO  | Wait 1 second(s) until the next check
2025-06-02 19:58:49.749215 | orchestrator | 2025-06-02 19:58:49 | INFO  | Task e777a0ca-8b96-4a99-87fd-14201763cfd3 is in state STARTED
2025-06-02 19:58:49.749463 | orchestrator | 2025-06-02 19:58:49 | INFO  | Task 9a1f4e8c-0c35-4ae6-b043-2ed60dabadb7 is in state STARTED
2025-06-02 19:58:49.750612 | orchestrator | 2025-06-02 19:58:49 | INFO  | Task 79e4a91d-9f91-4c53-92e4-cb6c7156cd28 is in state STARTED
2025-06-02 19:58:49.751518 | orchestrator | 2025-06-02 19:58:49 | INFO  | Task 5c204b22-7cbe-4782-8d82-5ddaaab26c83 is in state SUCCESS
2025-06-02 19:58:49.753136 | orchestrator |
2025-06-02 19:58:49.753191 | orchestrator |
2025-06-02 19:58:49.753204 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-06-02 19:58:49.753216 | orchestrator |
2025-06-02 19:58:49.753227 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-06-02 19:58:49.753238 | orchestrator | Monday 02 June 2025 19:58:21 +0000 (0:00:00.630) 0:00:00.630 ***********
2025-06-02 19:58:49.753248 | orchestrator | ok: [testbed-node-0]
2025-06-02 19:58:49.753260 | orchestrator | ok: [testbed-node-1]
2025-06-02 19:58:49.753271 | orchestrator | ok: [testbed-node-2]
2025-06-02 19:58:49.753390 | orchestrator |
2025-06-02 19:58:49.753408 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-06-02 19:58:49.753426 | orchestrator | Monday 02 June 2025 19:58:22 +0000 (0:00:00.744) 0:00:01.374 ***********
2025-06-02 19:58:49.753455 | orchestrator | ok: [testbed-node-0] => (item=enable_memcached_True)
2025-06-02 19:58:49.753472 | orchestrator | ok: [testbed-node-1] => (item=enable_memcached_True)
2025-06-02 19:58:49.753489 | orchestrator | ok: [testbed-node-2] => (item=enable_memcached_True)
2025-06-02 19:58:49.753506 | orchestrator |
2025-06-02 19:58:49.753523 | orchestrator | PLAY [Apply role memcached] ****************************************************
2025-06-02 19:58:49.753539 | orchestrator |
2025-06-02 19:58:49.753554 | orchestrator | TASK [memcached : include_tasks] ***********************************************
2025-06-02 19:58:49.753572 | orchestrator | Monday 02 June 2025 19:58:23 +0000 (0:00:00.870) 0:00:02.245 ***********
2025-06-02 19:58:49.753590 | orchestrator | included: /ansible/roles/memcached/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-02 19:58:49.753608 | orchestrator |
2025-06-02 19:58:49.753625 | orchestrator | TASK [memcached : Ensuring config directories exist] ***************************
2025-06-02 19:58:49.753644 | orchestrator | Monday 02 June 2025 19:58:23 +0000 (0:00:00.939) 0:00:03.184 ***********
2025-06-02 19:58:49.753663 | orchestrator | changed: [testbed-node-0] => (item=memcached)
2025-06-02 19:58:49.753682 | orchestrator | changed: [testbed-node-2] => (item=memcached)
2025-06-02 19:58:49.753695 | orchestrator | changed: [testbed-node-1] => (item=memcached)
2025-06-02 19:58:49.753706 | orchestrator |
2025-06-02 19:58:49.753716 | orchestrator | TASK [memcached : Copying over config.json files for services] *****************
2025-06-02 19:58:49.753727 | orchestrator | Monday 02 June 2025 19:58:25 +0000 (0:00:01.080) 0:00:04.265 ***********
2025-06-02 19:58:49.753758 | orchestrator | changed: [testbed-node-2] => (item=memcached)
2025-06-02 19:58:49.753769 | orchestrator | changed: [testbed-node-1] => (item=memcached)
2025-06-02 19:58:49.753780 | orchestrator | changed: [testbed-node-0] => (item=memcached)
2025-06-02 19:58:49.753790 | orchestrator |
2025-06-02 19:58:49.753801 | orchestrator | TASK [memcached : Check memcached container] ***********************************
2025-06-02 19:58:49.753811 | orchestrator | Monday 02 June 2025 19:58:28 +0000 (0:00:03.556) 0:00:07.822 ***********
2025-06-02 19:58:49.753822 | orchestrator | changed: [testbed-node-0]
2025-06-02 19:58:49.753833 | orchestrator | changed: [testbed-node-1]
2025-06-02 19:58:49.753843 | orchestrator | changed: [testbed-node-2]
2025-06-02 19:58:49.753854 | orchestrator |
2025-06-02 19:58:49.753864 | orchestrator | RUNNING HANDLER [memcached : Restart memcached container] **********************
2025-06-02 19:58:49.753875 | orchestrator | Monday 02 June 2025 19:58:31 +0000 (0:00:02.388) 0:00:10.211 ***********
2025-06-02 19:58:49.753886 | orchestrator | changed: [testbed-node-0]
2025-06-02 19:58:49.753896 | orchestrator | changed: [testbed-node-1]
2025-06-02 19:58:49.753906 | orchestrator | changed: [testbed-node-2]
2025-06-02 19:58:49.753917 | orchestrator |
2025-06-02 19:58:49.753928 | orchestrator | PLAY RECAP *********************************************************************
2025-06-02 19:58:49.753938 | orchestrator | testbed-node-0 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-02 19:58:49.753952 | orchestrator | testbed-node-1 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-02 19:58:49.753963 | orchestrator | testbed-node-2 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-02 19:58:49.753974 | orchestrator |
2025-06-02 19:58:49.753984 | orchestrator |
2025-06-02 19:58:49.753995 | orchestrator | TASKS RECAP ********************************************************************
2025-06-02 19:58:49.754006 | orchestrator | Monday 02 June 2025 19:58:38 +0000 (0:00:07.702) 0:00:17.913 ***********
2025-06-02 19:58:49.754105 | orchestrator | ===============================================================================
2025-06-02 19:58:49.754121 | orchestrator | memcached : Restart memcached container --------------------------------- 7.70s
2025-06-02 19:58:49.754132 | orchestrator | memcached : Copying over config.json files for services ----------------- 3.56s
2025-06-02 19:58:49.754142 | orchestrator | memcached : Check memcached container ----------------------------------- 2.39s
2025-06-02 19:58:49.754153 | orchestrator | memcached : Ensuring config directories exist --------------------------- 1.08s
2025-06-02 19:58:49.754164 | orchestrator | memcached : include_tasks ----------------------------------------------- 0.94s
2025-06-02 19:58:49.754174 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.87s
2025-06-02 19:58:49.754185 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.74s
2025-06-02 19:58:49.754195 | orchestrator |
2025-06-02 19:58:49.754206 | orchestrator |
2025-06-02 19:58:49.754216 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-06-02 19:58:49.754227 | orchestrator |
2025-06-02 19:58:49.754237 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-06-02 19:58:49.754248 | orchestrator | Monday 02 June 2025 19:58:21 +0000 (0:00:00.465) 0:00:00.465 ***********
2025-06-02 19:58:49.754258 | orchestrator | ok: [testbed-node-0]
2025-06-02 19:58:49.754269 | orchestrator | ok: [testbed-node-1]
2025-06-02 19:58:49.754451 | orchestrator | ok: [testbed-node-2]
2025-06-02 19:58:49.754475 | orchestrator |
2025-06-02 19:58:49.754496 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-06-02 19:58:49.754567 | orchestrator | Monday 02 June 2025 19:58:21 +0000 (0:00:00.294) 0:00:00.759 ***********
2025-06-02 19:58:49.754581 | orchestrator | ok: [testbed-node-0] => (item=enable_redis_True)
2025-06-02 19:58:49.754592 | orchestrator | ok: [testbed-node-1] => (item=enable_redis_True)
2025-06-02 19:58:49.754616 | orchestrator | ok: [testbed-node-2] => (item=enable_redis_True)
2025-06-02 19:58:49.754626 | orchestrator |
2025-06-02 19:58:49.754637 | orchestrator | PLAY [Apply role redis] ********************************************************
2025-06-02 19:58:49.754648 | orchestrator |
2025-06-02 19:58:49.754659 | orchestrator | TASK [redis : include_tasks] ***************************************************
2025-06-02 19:58:49.754670 | orchestrator | Monday 02 June 2025 19:58:22 +0000 (0:00:00.660) 0:00:01.420 ***********
2025-06-02 19:58:49.754688 | orchestrator | included: /ansible/roles/redis/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-02 19:58:49.754700 | orchestrator |
2025-06-02 19:58:49.754711 | orchestrator | TASK [redis : Ensuring config directories exist] *******************************
2025-06-02 19:58:49.754722 | orchestrator | Monday 02 June 2025 19:58:23 +0000 (0:00:01.052) 0:00:02.472 ***********
2025-06-02 19:58:49.754736 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20250530', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2025-06-02 19:58:49.754753 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20250530', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2025-06-02 19:58:49.754765 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20250530', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2025-06-02 19:58:49.754777 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250530', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2025-06-02 19:58:49.754789 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250530', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2025-06-02 19:58:49.754831 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250530', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2025-06-02 19:58:49.754843 | orchestrator |
2025-06-02 19:58:49.754855 | orchestrator | TASK [redis : Copying over default config.json files] **************************
2025-06-02 19:58:49.754865 | orchestrator | Monday 02 June 2025 19:58:24 +0000 (0:00:01.507) 0:00:03.979 ***********
2025-06-02 19:58:49.754875 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20250530', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2025-06-02 19:58:49.754885 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20250530', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2025-06-02 19:58:49.754895 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20250530', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2025-06-02 19:58:49.754905 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250530', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2025-06-02 19:58:49.754915 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250530', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2025-06-02 19:58:49.754940 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250530', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2025-06-02 19:58:49.754950 | orchestrator |
2025-06-02 19:58:49.754965 | orchestrator | TASK [redis : Copying over redis config files] *********************************
2025-06-02 19:58:49.754976 | orchestrator | Monday 02 June 2025 19:58:28 +0000 (0:00:03.942) 0:00:07.922 ***********
2025-06-02 19:58:49.754990 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20250530', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2025-06-02 19:58:49.755000 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20250530', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2025-06-02 19:58:49.755010 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20250530', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2025-06-02 19:58:49.755020 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250530', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2025-06-02 19:58:49.755030 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250530', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2025-06-02 19:58:49.755053 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250530', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2025-06-02 19:58:49.755063 | orchestrator |
2025-06-02 19:58:49.755072 | orchestrator | TASK [redis : Check redis containers] ******************************************
2025-06-02 19:58:49.755082 | orchestrator | Monday 02 June 2025 19:58:32 +0000 (0:00:03.351) 0:00:11.273 ***********
2025-06-02 19:58:49.755096 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20250530', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2025-06-02 19:58:49.755106 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20250530', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2025-06-02 19:58:49.755116 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20250530', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2025-06-02 19:58:49.755126 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250530', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2025-06-02 19:58:49.755136 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250530', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2025-06-02 19:58:49.755157 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250530', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2025-06-02 19:58:49.755167 | orchestrator |
2025-06-02 19:58:49.755177 | orchestrator | TASK [redis : Flush handlers] **************************************************
2025-06-02 19:58:49.755186 | orchestrator | Monday 02 June 2025 19:58:33 +0000 (0:00:01.653) 0:00:12.927 ***********
2025-06-02 19:58:49.755196 | orchestrator |
2025-06-02 19:58:49.755206 | orchestrator | TASK [redis : Flush handlers] **************************************************
2025-06-02 19:58:49.755215 | orchestrator | Monday 02 June 2025 19:58:33 +0000 (0:00:00.109) 0:00:13.036 ***********
2025-06-02 19:58:49.755224 | orchestrator |
2025-06-02 19:58:49.755238 | orchestrator | TASK [redis : Flush handlers] **************************************************
2025-06-02 19:58:49.755248 | orchestrator | Monday 02 June 2025 19:58:33 +0000 (0:00:00.131) 0:00:13.168 ***********
2025-06-02 19:58:49.755257 | orchestrator |
2025-06-02 19:58:49.755266 | orchestrator | RUNNING HANDLER [redis : Restart redis container] ******************************
2025-06-02 19:58:49.755299 | orchestrator | Monday 02 June 2025 19:58:34 +0000 (0:00:00.132) 0:00:13.300 ***********
2025-06-02 19:58:49.755310 | orchestrator | changed: [testbed-node-0]
2025-06-02 19:58:49.755320 | orchestrator | changed: [testbed-node-1]
2025-06-02 19:58:49.755330 | orchestrator | changed: [testbed-node-2]
2025-06-02 19:58:49.755339 | orchestrator |
2025-06-02 19:58:49.755349 | orchestrator | RUNNING HANDLER [redis : Restart redis-sentinel container] *********************
2025-06-02 19:58:49.755358 | orchestrator | Monday 02 June 2025 19:58:43 +0000 (0:00:09.637) 0:00:22.937 ***********
2025-06-02 19:58:49.755368 | orchestrator | changed: [testbed-node-0]
2025-06-02 19:58:49.755377 | orchestrator | changed: [testbed-node-2]
2025-06-02 19:58:49.755386 | orchestrator | changed: [testbed-node-1]
2025-06-02 19:58:49.755395 | orchestrator |
2025-06-02 19:58:49.755405 | orchestrator | PLAY RECAP *********************************************************************
2025-06-02 19:58:49.755415 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-02 19:58:49.755425 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-02 19:58:49.755434 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-02 19:58:49.755444 | orchestrator |
2025-06-02 19:58:49.755453 | orchestrator |
2025-06-02 19:58:49.755463 | orchestrator | TASKS RECAP ********************************************************************
2025-06-02 19:58:49.755472 | orchestrator | Monday 02 June 2025 19:58:48 +0000 (0:00:04.521) 0:00:27.459 ***********
2025-06-02 19:58:49.755481 | orchestrator | ===============================================================================
2025-06-02 19:58:49.755490 | orchestrator | redis : Restart redis container ----------------------------------------- 9.64s
2025-06-02 19:58:49.755504 | orchestrator | redis : Restart redis-sentinel container -------------------------------- 4.52s
2025-06-02 19:58:49.755529 | orchestrator | redis : Copying over default config.json files -------------------------- 3.94s
2025-06-02 19:58:49.755545 | orchestrator | redis : Copying over redis config files --------------------------------- 3.35s
2025-06-02 19:58:49.755559 | orchestrator | redis : Check redis containers ------------------------------------------ 1.65s
2025-06-02 19:58:49.755575 | orchestrator | redis : Ensuring config directories exist ------------------------------- 1.51s
2025-06-02 19:58:49.755588 | orchestrator | redis : include_tasks --------------------------------------------------- 1.05s
2025-06-02 19:58:49.755597 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.66s
2025-06-02 19:58:49.755607 | orchestrator | redis : Flush handlers -------------------------------------------------- 0.37s
2025-06-02 19:58:49.755616 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.29s
2025-06-02 19:58:49.755625 | orchestrator | 2025-06-02 19:58:49 | INFO  | Task 4efb0510-077d-4334-9816-4b4ce82f3dee is in state STARTED
2025-06-02 19:58:49.755635 | orchestrator | 2025-06-02 19:58:49 | INFO  | Task 4b60122b-2761-44e0-9fd9-858842834dfd is in state STARTED
2025-06-02 19:58:49.755645 | orchestrator | 2025-06-02 19:58:49 | INFO  | Wait 1 second(s) until the next check
2025-06-02 19:58:52.785911 | orchestrator | 2025-06-02 19:58:52 | INFO  | Task e777a0ca-8b96-4a99-87fd-14201763cfd3 is in state STARTED
2025-06-02 19:58:52.786252 | orchestrator | 2025-06-02 19:58:52 | INFO  | Task 9a1f4e8c-0c35-4ae6-b043-2ed60dabadb7 is in state STARTED
2025-06-02 19:58:52.787141 | orchestrator | 2025-06-02 19:58:52 | INFO  | Task 79e4a91d-9f91-4c53-92e4-cb6c7156cd28 is in state STARTED
2025-06-02 19:58:52.788663 | orchestrator | 2025-06-02 19:58:52 | INFO  | Task
4efb0510-077d-4334-9816-4b4ce82f3dee is in state STARTED
2025-06-02 19:59:23.264734 | orchestrator | 2025-06-02 19:59:23 | INFO  | Task 4b60122b-2761-44e0-9fd9-858842834dfd is in state STARTED
2025-06-02 19:59:23.264770 | orchestrator | 2025-06-02 19:59:23 | INFO  | Wait 1 second(s) until the next check
2025-06-02 19:59:26.307889 | orchestrator | 2025-06-02 19:59:26 | INFO  | Task e777a0ca-8b96-4a99-87fd-14201763cfd3 is in state SUCCESS
2025-06-02 19:59:26.308720 | orchestrator |
2025-06-02 19:59:26.308748 | orchestrator |
2025-06-02 19:59:26.308754 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-06-02 19:59:26.308759 | orchestrator |
2025-06-02 19:59:26.308763 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-06-02 19:59:26.308768 | orchestrator | Monday 02 June 2025 19:58:21 +0000 (0:00:00.447) 0:00:00.447 ***********
2025-06-02 19:59:26.308773 | orchestrator | ok: [testbed-node-0]
2025-06-02 19:59:26.308780 | orchestrator | ok: [testbed-node-1]
2025-06-02 19:59:26.308784 | orchestrator | ok: [testbed-node-2]
2025-06-02 19:59:26.308788 | orchestrator | ok: [testbed-node-3]
2025-06-02 19:59:26.308793 | orchestrator | ok: [testbed-node-4]
2025-06-02 19:59:26.308797 | orchestrator | ok: [testbed-node-5]
2025-06-02 19:59:26.308801 | orchestrator |
2025-06-02 19:59:26.308806 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-06-02 19:59:26.308810 | orchestrator | Monday 02 June 2025 19:58:22 +0000 (0:00:01.104) 0:00:01.551 ***********
2025-06-02 19:59:26.308835 | orchestrator | ok: [testbed-node-0] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2025-06-02 19:59:26.308840 | orchestrator | ok: [testbed-node-1] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2025-06-02 19:59:26.308864 | orchestrator | ok: [testbed-node-2] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2025-06-02 19:59:26.308869 | orchestrator | ok: [testbed-node-3] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2025-06-02 19:59:26.308873 | orchestrator | ok: [testbed-node-4] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2025-06-02 19:59:26.308876 | orchestrator | ok: [testbed-node-5] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2025-06-02 19:59:26.308880 | orchestrator |
2025-06-02 19:59:26.308884 | orchestrator | PLAY [Apply role openvswitch] **************************************************
2025-06-02 19:59:26.308888 | orchestrator |
2025-06-02 19:59:26.308891 | orchestrator | TASK [openvswitch : include_tasks] *********************************************
2025-06-02 19:59:26.308895 | orchestrator | Monday 02 June 2025 19:58:23 +0000 (0:00:00.843) 0:00:02.394 ***********
2025-06-02 19:59:26.308900 | orchestrator | included: /ansible/roles/openvswitch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-06-02 19:59:26.308905 | orchestrator |
2025-06-02 19:59:26.308908 | orchestrator | TASK [module-load : Load modules] **********************************************
2025-06-02 19:59:26.308912 | orchestrator | Monday 02 June 2025 19:58:25 +0000 (0:00:02.368) 0:00:04.763 ***********
2025-06-02 19:59:26.308916 | orchestrator | changed: [testbed-node-2] => (item=openvswitch)
2025-06-02 19:59:26.308920 | orchestrator | changed: [testbed-node-1] => (item=openvswitch)
2025-06-02 19:59:26.308924 | orchestrator | changed: [testbed-node-0] => (item=openvswitch)
2025-06-02 19:59:26.308928 | orchestrator | changed: [testbed-node-3] => (item=openvswitch)
2025-06-02 19:59:26.308942 | orchestrator | changed: [testbed-node-4] => (item=openvswitch)
2025-06-02 19:59:26.308946 | orchestrator | changed: [testbed-node-5] => (item=openvswitch)
2025-06-02 19:59:26.308949 | orchestrator |
2025-06-02 19:59:26.308953 | orchestrator | TASK [module-load : Persist modules via modules-load.d]
************************
2025-06-02 19:59:26.308957 | orchestrator | Monday 02 June 2025 19:58:27 +0000 (0:00:02.003) 0:00:06.766 ***********
2025-06-02 19:59:26.308960 | orchestrator | changed: [testbed-node-3] => (item=openvswitch)
2025-06-02 19:59:26.308964 | orchestrator | changed: [testbed-node-1] => (item=openvswitch)
2025-06-02 19:59:26.308968 | orchestrator | changed: [testbed-node-2] => (item=openvswitch)
2025-06-02 19:59:26.308971 | orchestrator | changed: [testbed-node-0] => (item=openvswitch)
2025-06-02 19:59:26.308975 | orchestrator | changed: [testbed-node-4] => (item=openvswitch)
2025-06-02 19:59:26.308979 | orchestrator | changed: [testbed-node-5] => (item=openvswitch)
2025-06-02 19:59:26.308982 | orchestrator |
2025-06-02 19:59:26.308986 | orchestrator | TASK [module-load : Drop module persistence] ***********************************
2025-06-02 19:59:26.308990 | orchestrator | Monday 02 June 2025 19:58:29 +0000 (0:00:02.341) 0:00:09.108 ***********
2025-06-02 19:59:26.308994 | orchestrator | skipping: [testbed-node-0] => (item=openvswitch)
2025-06-02 19:59:26.308997 | orchestrator | skipping: [testbed-node-0]
2025-06-02 19:59:26.309002 | orchestrator | skipping: [testbed-node-1] => (item=openvswitch)
2025-06-02 19:59:26.309006 | orchestrator | skipping: [testbed-node-1]
2025-06-02 19:59:26.309009 | orchestrator | skipping: [testbed-node-2] => (item=openvswitch)
2025-06-02 19:59:26.309013 | orchestrator | skipping: [testbed-node-2]
2025-06-02 19:59:26.309016 | orchestrator | skipping: [testbed-node-3] => (item=openvswitch)
2025-06-02 19:59:26.309020 | orchestrator | skipping: [testbed-node-3]
2025-06-02 19:59:26.309024 | orchestrator | skipping: [testbed-node-4] => (item=openvswitch)
2025-06-02 19:59:26.309027 | orchestrator | skipping: [testbed-node-4]
2025-06-02 19:59:26.309031 | orchestrator | skipping: [testbed-node-5] => (item=openvswitch)
2025-06-02 19:59:26.309035 | orchestrator | skipping: [testbed-node-5]
2025-06-02 19:59:26.309038 |
orchestrator | 2025-06-02 19:59:26.309042 | orchestrator | TASK [openvswitch : Create /run/openvswitch directory on host] ***************** 2025-06-02 19:59:26.309046 | orchestrator | Monday 02 June 2025 19:58:31 +0000 (0:00:01.361) 0:00:10.469 *********** 2025-06-02 19:59:26.309063 | orchestrator | skipping: [testbed-node-0] 2025-06-02 19:59:26.309067 | orchestrator | skipping: [testbed-node-1] 2025-06-02 19:59:26.309071 | orchestrator | skipping: [testbed-node-2] 2025-06-02 19:59:26.309075 | orchestrator | skipping: [testbed-node-3] 2025-06-02 19:59:26.309078 | orchestrator | skipping: [testbed-node-4] 2025-06-02 19:59:26.309082 | orchestrator | skipping: [testbed-node-5] 2025-06-02 19:59:26.309085 | orchestrator | 2025-06-02 19:59:26.309089 | orchestrator | TASK [openvswitch : Ensuring config directories exist] ************************* 2025-06-02 19:59:26.309093 | orchestrator | Monday 02 June 2025 19:58:32 +0000 (0:00:01.036) 0:00:11.505 *********** 2025-06-02 19:59:26.309109 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-06-02 19:59:26.309115 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250530', 'enabled': True, 'group': 
'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-06-02 19:59:26.309122 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-06-02 19:59:26.309126 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 
'timeout': '30'}}}) 2025-06-02 19:59:26.309130 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-06-02 19:59:26.309140 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-06-02 19:59:26.309145 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-06-02 19:59:26.309149 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-06-02 19:59:26.309156 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-06-02 19:59:26.309160 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 
'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-06-02 19:59:26.309164 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-06-02 19:59:26.309188 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-06-02 19:59:26.309193 | orchestrator | 2025-06-02 19:59:26.309197 | orchestrator | TASK [openvswitch : Copying over config.json files for services] *************** 2025-06-02 19:59:26.309201 | orchestrator | Monday 02 June 2025 19:58:33 +0000 (0:00:01.591) 0:00:13.097 *********** 2025-06-02 19:59:26.309205 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-06-02 19:59:26.309209 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-06-02 19:59:26.309215 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': 
{'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-06-02 19:59:26.309219 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-06-02 19:59:26.309227 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-06-02 19:59:26.309234 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-06-02 19:59:26.309238 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-06-02 19:59:26.309245 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': 
['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-06-02 19:59:26.309249 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-06-02 19:59:26.309255 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-06-02 19:59:26.309262 | orchestrator | changed: [testbed-node-3] => (item={'key': 
'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-06-02 19:59:26.309283 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-06-02 19:59:26.309287 | orchestrator | 2025-06-02 19:59:26.309291 | orchestrator | TASK [openvswitch : Copying over ovs-vsctl wrapper] **************************** 2025-06-02 19:59:26.309295 | orchestrator | Monday 02 June 2025 19:58:37 +0000 (0:00:03.840) 0:00:16.937 *********** 2025-06-02 19:59:26.309299 | orchestrator | skipping: [testbed-node-0] 2025-06-02 19:59:26.309303 | orchestrator | skipping: [testbed-node-1] 2025-06-02 19:59:26.309306 | orchestrator | skipping: [testbed-node-2] 2025-06-02 19:59:26.309310 | orchestrator | skipping: [testbed-node-3] 2025-06-02 19:59:26.309314 | 
orchestrator | skipping: [testbed-node-5] 2025-06-02 19:59:26.309317 | orchestrator | skipping: [testbed-node-4] 2025-06-02 19:59:26.309321 | orchestrator | 2025-06-02 19:59:26.309325 | orchestrator | TASK [openvswitch : Check openvswitch containers] ****************************** 2025-06-02 19:59:26.309328 | orchestrator | Monday 02 June 2025 19:58:38 +0000 (0:00:01.127) 0:00:18.064 *********** 2025-06-02 19:59:26.309332 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-06-02 19:59:26.309340 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-06-02 19:59:26.309348 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-06-02 19:59:26.309355 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-06-02 19:59:26.309359 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 
'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-06-02 19:59:26.309363 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-06-02 19:59:26.309373 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-06-02 19:59:26.309382 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 
'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-06-02 19:59:26.309385 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-06-02 19:59:26.309394 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-06-02 19:59:26.309398 | orchestrator | changed: 
[testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-06-02 19:59:26.309402 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-06-02 19:59:26.309410 | orchestrator | 2025-06-02 19:59:26.309414 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-06-02 19:59:26.309418 | orchestrator | Monday 02 June 2025 19:58:41 +0000 (0:00:02.966) 0:00:21.030 *********** 2025-06-02 19:59:26.309422 | orchestrator | 2025-06-02 19:59:26.309428 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-06-02 19:59:26.309432 | orchestrator | Monday 02 June 2025 19:58:41 +0000 (0:00:00.140) 
0:00:21.171 *********** 2025-06-02 19:59:26.309435 | orchestrator | 2025-06-02 19:59:26.309439 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-06-02 19:59:26.309443 | orchestrator | Monday 02 June 2025 19:58:42 +0000 (0:00:00.101) 0:00:21.272 *********** 2025-06-02 19:59:26.309446 | orchestrator | 2025-06-02 19:59:26.309450 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-06-02 19:59:26.309453 | orchestrator | Monday 02 June 2025 19:58:42 +0000 (0:00:00.105) 0:00:21.378 *********** 2025-06-02 19:59:26.309457 | orchestrator | 2025-06-02 19:59:26.309461 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-06-02 19:59:26.309464 | orchestrator | Monday 02 June 2025 19:58:42 +0000 (0:00:00.143) 0:00:21.522 *********** 2025-06-02 19:59:26.309468 | orchestrator | 2025-06-02 19:59:26.309472 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-06-02 19:59:26.309475 | orchestrator | Monday 02 June 2025 19:58:42 +0000 (0:00:00.220) 0:00:21.742 *********** 2025-06-02 19:59:26.309479 | orchestrator | 2025-06-02 19:59:26.309483 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-db-server container] ******** 2025-06-02 19:59:26.309486 | orchestrator | Monday 02 June 2025 19:58:42 +0000 (0:00:00.504) 0:00:22.247 *********** 2025-06-02 19:59:26.309490 | orchestrator | changed: [testbed-node-0] 2025-06-02 19:59:26.309494 | orchestrator | changed: [testbed-node-3] 2025-06-02 19:59:26.309497 | orchestrator | changed: [testbed-node-1] 2025-06-02 19:59:26.309501 | orchestrator | changed: [testbed-node-2] 2025-06-02 19:59:26.309505 | orchestrator | changed: [testbed-node-5] 2025-06-02 19:59:26.309508 | orchestrator | changed: [testbed-node-4] 2025-06-02 19:59:26.309512 | orchestrator | 2025-06-02 19:59:26.309516 | orchestrator | RUNNING HANDLER [openvswitch : 
Waiting for openvswitch_db service to be ready] *** 2025-06-02 19:59:26.309519 | orchestrator | Monday 02 June 2025 19:58:49 +0000 (0:00:06.640) 0:00:28.887 *********** 2025-06-02 19:59:26.309523 | orchestrator | ok: [testbed-node-2] 2025-06-02 19:59:26.309527 | orchestrator | ok: [testbed-node-0] 2025-06-02 19:59:26.309530 | orchestrator | ok: [testbed-node-1] 2025-06-02 19:59:26.309534 | orchestrator | ok: [testbed-node-3] 2025-06-02 19:59:26.309538 | orchestrator | ok: [testbed-node-4] 2025-06-02 19:59:26.309541 | orchestrator | ok: [testbed-node-5] 2025-06-02 19:59:26.309545 | orchestrator | 2025-06-02 19:59:26.309549 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] ********* 2025-06-02 19:59:26.309553 | orchestrator | Monday 02 June 2025 19:58:51 +0000 (0:00:01.752) 0:00:30.639 *********** 2025-06-02 19:59:26.309556 | orchestrator | changed: [testbed-node-2] 2025-06-02 19:59:26.309560 | orchestrator | changed: [testbed-node-1] 2025-06-02 19:59:26.309563 | orchestrator | changed: [testbed-node-0] 2025-06-02 19:59:26.309567 | orchestrator | changed: [testbed-node-3] 2025-06-02 19:59:26.309571 | orchestrator | changed: [testbed-node-4] 2025-06-02 19:59:26.309574 | orchestrator | changed: [testbed-node-5] 2025-06-02 19:59:26.309578 | orchestrator | 2025-06-02 19:59:26.309582 | orchestrator | TASK [openvswitch : Set system-id, hostname and hw-offload] ******************** 2025-06-02 19:59:26.309585 | orchestrator | Monday 02 June 2025 19:59:00 +0000 (0:00:08.923) 0:00:39.563 *********** 2025-06-02 19:59:26.309653 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-0'}) 2025-06-02 19:59:26.309659 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-3'}) 2025-06-02 19:59:26.309663 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 
'testbed-node-4'}) 2025-06-02 19:59:26.309671 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-5'}) 2025-06-02 19:59:26.309674 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-2'}) 2025-06-02 19:59:26.309678 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-1'}) 2025-06-02 19:59:26.309682 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-0'}) 2025-06-02 19:59:26.309685 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-3'}) 2025-06-02 19:59:26.309689 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-4'}) 2025-06-02 19:59:26.309693 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-5'}) 2025-06-02 19:59:26.309696 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-2'}) 2025-06-02 19:59:26.309700 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-1'}) 2025-06-02 19:59:26.309703 | orchestrator | ok: [testbed-node-0] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-06-02 19:59:26.309707 | orchestrator | ok: [testbed-node-3] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-06-02 19:59:26.309711 | orchestrator | ok: [testbed-node-4] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-06-02 19:59:26.309714 | orchestrator | ok: [testbed-node-5] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 
'absent'}) 2025-06-02 19:59:26.309721 | orchestrator | ok: [testbed-node-2] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-06-02 19:59:26.309724 | orchestrator | ok: [testbed-node-1] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-06-02 19:59:26.309728 | orchestrator | 2025-06-02 19:59:26.309732 | orchestrator | TASK [openvswitch : Ensuring OVS bridge is properly setup] ********************* 2025-06-02 19:59:26.309735 | orchestrator | Monday 02 June 2025 19:59:08 +0000 (0:00:08.685) 0:00:48.249 *********** 2025-06-02 19:59:26.309739 | orchestrator | skipping: [testbed-node-3] => (item=br-ex)  2025-06-02 19:59:26.309743 | orchestrator | skipping: [testbed-node-3] 2025-06-02 19:59:26.309747 | orchestrator | skipping: [testbed-node-4] => (item=br-ex)  2025-06-02 19:59:26.309750 | orchestrator | skipping: [testbed-node-4] 2025-06-02 19:59:26.309754 | orchestrator | skipping: [testbed-node-5] => (item=br-ex)  2025-06-02 19:59:26.309758 | orchestrator | skipping: [testbed-node-5] 2025-06-02 19:59:26.309761 | orchestrator | changed: [testbed-node-0] => (item=br-ex) 2025-06-02 19:59:26.309765 | orchestrator | changed: [testbed-node-2] => (item=br-ex) 2025-06-02 19:59:26.309769 | orchestrator | changed: [testbed-node-1] => (item=br-ex) 2025-06-02 19:59:26.309772 | orchestrator | 2025-06-02 19:59:26.309776 | orchestrator | TASK [openvswitch : Ensuring OVS ports are properly setup] ********************* 2025-06-02 19:59:26.309780 | orchestrator | Monday 02 June 2025 19:59:11 +0000 (0:00:02.218) 0:00:50.467 *********** 2025-06-02 19:59:26.309783 | orchestrator | skipping: [testbed-node-3] => (item=['br-ex', 'vxlan0'])  2025-06-02 19:59:26.309787 | orchestrator | skipping: [testbed-node-3] 2025-06-02 19:59:26.309791 | orchestrator | skipping: [testbed-node-4] => (item=['br-ex', 'vxlan0'])  2025-06-02 19:59:26.309794 | orchestrator | skipping: [testbed-node-4] 2025-06-02 
19:59:26.309798 | orchestrator | skipping: [testbed-node-5] => (item=['br-ex', 'vxlan0'])  2025-06-02 19:59:26.309802 | orchestrator | skipping: [testbed-node-5] 2025-06-02 19:59:26.309805 | orchestrator | changed: [testbed-node-0] => (item=['br-ex', 'vxlan0']) 2025-06-02 19:59:26.309813 | orchestrator | changed: [testbed-node-2] => (item=['br-ex', 'vxlan0']) 2025-06-02 19:59:26.309816 | orchestrator | changed: [testbed-node-1] => (item=['br-ex', 'vxlan0']) 2025-06-02 19:59:26.309820 | orchestrator | 2025-06-02 19:59:26.309824 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] ********* 2025-06-02 19:59:26.309827 | orchestrator | Monday 02 June 2025 19:59:15 +0000 (0:00:03.926) 0:00:54.394 *********** 2025-06-02 19:59:26.309831 | orchestrator | changed: [testbed-node-1] 2025-06-02 19:59:26.309835 | orchestrator | changed: [testbed-node-2] 2025-06-02 19:59:26.309838 | orchestrator | changed: [testbed-node-0] 2025-06-02 19:59:26.309842 | orchestrator | changed: [testbed-node-4] 2025-06-02 19:59:26.309846 | orchestrator | changed: [testbed-node-3] 2025-06-02 19:59:26.309849 | orchestrator | changed: [testbed-node-5] 2025-06-02 19:59:26.309853 | orchestrator | 2025-06-02 19:59:26.309858 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-02 19:59:26.309865 | orchestrator | testbed-node-0 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-06-02 19:59:26.309873 | orchestrator | testbed-node-1 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-06-02 19:59:26.309879 | orchestrator | testbed-node-2 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-06-02 19:59:26.309885 | orchestrator | testbed-node-3 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-06-02 19:59:26.309891 | orchestrator | testbed-node-4 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 
ignored=0 2025-06-02 19:59:26.309897 | orchestrator | testbed-node-5 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-06-02 19:59:26.309902 | orchestrator | 2025-06-02 19:59:26.309908 | orchestrator | 2025-06-02 19:59:26.309914 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-02 19:59:26.309921 | orchestrator | Monday 02 June 2025 19:59:23 +0000 (0:00:08.414) 0:01:02.808 *********** 2025-06-02 19:59:26.309927 | orchestrator | =============================================================================== 2025-06-02 19:59:26.309932 | orchestrator | openvswitch : Restart openvswitch-vswitchd container ------------------- 17.34s 2025-06-02 19:59:26.309939 | orchestrator | openvswitch : Set system-id, hostname and hw-offload -------------------- 8.69s 2025-06-02 19:59:26.309945 | orchestrator | openvswitch : Restart openvswitch-db-server container ------------------- 6.64s 2025-06-02 19:59:26.309951 | orchestrator | openvswitch : Ensuring OVS ports are properly setup --------------------- 3.93s 2025-06-02 19:59:26.309957 | orchestrator | openvswitch : Copying over config.json files for services --------------- 3.84s 2025-06-02 19:59:26.309961 | orchestrator | openvswitch : Check openvswitch containers ------------------------------ 2.97s 2025-06-02 19:59:26.309977 | orchestrator | openvswitch : include_tasks --------------------------------------------- 2.37s 2025-06-02 19:59:26.309984 | orchestrator | module-load : Persist modules via modules-load.d ------------------------ 2.34s 2025-06-02 19:59:26.309990 | orchestrator | openvswitch : Ensuring OVS bridge is properly setup --------------------- 2.22s 2025-06-02 19:59:26.309996 | orchestrator | module-load : Load modules ---------------------------------------------- 2.00s 2025-06-02 19:59:26.310002 | orchestrator | openvswitch : Waiting for openvswitch_db service to be ready ------------ 1.75s 2025-06-02 19:59:26.310008 | 
orchestrator | openvswitch : Ensuring config directories exist ------------------------- 1.59s 2025-06-02 19:59:26.310054 | orchestrator | module-load : Drop module persistence ----------------------------------- 1.36s 2025-06-02 19:59:26.310060 | orchestrator | openvswitch : Flush Handlers -------------------------------------------- 1.22s 2025-06-02 19:59:26.310064 | orchestrator | openvswitch : Copying over ovs-vsctl wrapper ---------------------------- 1.13s 2025-06-02 19:59:26.310072 | orchestrator | Group hosts based on Kolla action --------------------------------------- 1.10s 2025-06-02 19:59:26.310076 | orchestrator | openvswitch : Create /run/openvswitch directory on host ----------------- 1.04s 2025-06-02 19:59:26.310080 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.84s 2025-06-02 19:59:26.310087 | orchestrator | 2025-06-02 19:59:26 | INFO  | Task 9a1f4e8c-0c35-4ae6-b043-2ed60dabadb7 is in state STARTED 2025-06-02 19:59:26.312664 | orchestrator | 2025-06-02 19:59:26 | INFO  | Task 85c91c73-14c3-4551-978b-a0a75100d21a is in state STARTED 2025-06-02 19:59:26.314608 | orchestrator | 2025-06-02 19:59:26 | INFO  | Task 79e4a91d-9f91-4c53-92e4-cb6c7156cd28 is in state STARTED 2025-06-02 19:59:26.319281 | orchestrator | 2025-06-02 19:59:26 | INFO  | Task 4efb0510-077d-4334-9816-4b4ce82f3dee is in state STARTED 2025-06-02 19:59:26.321250 | orchestrator | 2025-06-02 19:59:26 | INFO  | Task 4b60122b-2761-44e0-9fd9-858842834dfd is in state STARTED 2025-06-02 19:59:26.321382 | orchestrator | 2025-06-02 19:59:26 | INFO  | Wait 1 second(s) until the next check 2025-06-02 20:00:15.137632 | orchestrator | 2025-06-02 20:00:15 | INFO  | Task 9a1f4e8c-0c35-4ae6-b043-2ed60dabadb7 is in state STARTED 2025-06-02 20:00:15.137796 | orchestrator | 2025-06-02 20:00:15 | INFO  | Task 85c91c73-14c3-4551-978b-a0a75100d21a is in state STARTED 2025-06-02 20:00:15.141014 | orchestrator | 2025-06-02 20:00:15 | INFO  | Task
79e4a91d-9f91-4c53-92e4-cb6c7156cd28 is in state STARTED 2025-06-02 20:00:15.147248 | orchestrator | 2025-06-02 20:00:15 | INFO  | Task 4efb0510-077d-4334-9816-4b4ce82f3dee is in state STARTED 2025-06-02 20:00:15.148913 | orchestrator | 2025-06-02 20:00:15 | INFO  | Task 4b60122b-2761-44e0-9fd9-858842834dfd is in state STARTED 2025-06-02 20:00:15.149641 | orchestrator | 2025-06-02 20:00:15 | INFO  | Wait 1 second(s) until the next check 2025-06-02 20:00:18.199873 | orchestrator | 2025-06-02 20:00:18 | INFO  | Task 9a1f4e8c-0c35-4ae6-b043-2ed60dabadb7 is in state STARTED 2025-06-02 20:00:18.202534 | orchestrator | 2025-06-02 20:00:18 | INFO  | Task 85c91c73-14c3-4551-978b-a0a75100d21a is in state STARTED 2025-06-02 20:00:18.205324 | orchestrator | 2025-06-02 20:00:18 | INFO  | Task 79e4a91d-9f91-4c53-92e4-cb6c7156cd28 is in state STARTED 2025-06-02 20:00:18.208050 | orchestrator | 2025-06-02 20:00:18 | INFO  | Task 4efb0510-077d-4334-9816-4b4ce82f3dee is in state STARTED 2025-06-02 20:00:18.210598 | orchestrator | 2025-06-02 20:00:18 | INFO  | Task 4b60122b-2761-44e0-9fd9-858842834dfd is in state STARTED 2025-06-02 20:00:18.210639 | orchestrator | 2025-06-02 20:00:18 | INFO  | Wait 1 second(s) until the next check 2025-06-02 20:00:21.248672 | orchestrator | 2025-06-02 20:00:21 | INFO  | Task 9a1f4e8c-0c35-4ae6-b043-2ed60dabadb7 is in state STARTED 2025-06-02 20:00:21.248854 | orchestrator | 2025-06-02 20:00:21 | INFO  | Task 85c91c73-14c3-4551-978b-a0a75100d21a is in state STARTED 2025-06-02 20:00:21.249240 | orchestrator | 2025-06-02 20:00:21 | INFO  | Task 79e4a91d-9f91-4c53-92e4-cb6c7156cd28 is in state STARTED 2025-06-02 20:00:21.249983 | orchestrator | 2025-06-02 20:00:21 | INFO  | Task 4efb0510-077d-4334-9816-4b4ce82f3dee is in state STARTED 2025-06-02 20:00:21.250766 | orchestrator | 2025-06-02 20:00:21 | INFO  | Task 4b60122b-2761-44e0-9fd9-858842834dfd is in state STARTED 2025-06-02 20:00:21.251329 | orchestrator | 2025-06-02 20:00:21 | INFO  | Wait 1 
second(s) until the next check 2025-06-02 20:00:24.300781 | orchestrator | 2025-06-02 20:00:24 | INFO  | Task 9a1f4e8c-0c35-4ae6-b043-2ed60dabadb7 is in state STARTED 2025-06-02 20:00:24.302480 | orchestrator | 2025-06-02 20:00:24 | INFO  | Task 85c91c73-14c3-4551-978b-a0a75100d21a is in state STARTED 2025-06-02 20:00:24.303832 | orchestrator | 2025-06-02 20:00:24 | INFO  | Task 79e4a91d-9f91-4c53-92e4-cb6c7156cd28 is in state STARTED 2025-06-02 20:00:24.305227 | orchestrator | 2025-06-02 20:00:24 | INFO  | Task 4efb0510-077d-4334-9816-4b4ce82f3dee is in state STARTED 2025-06-02 20:00:24.306906 | orchestrator | 2025-06-02 20:00:24 | INFO  | Task 4b60122b-2761-44e0-9fd9-858842834dfd is in state STARTED 2025-06-02 20:00:24.306954 | orchestrator | 2025-06-02 20:00:24 | INFO  | Wait 1 second(s) until the next check 2025-06-02 20:00:27.354633 | orchestrator | 2025-06-02 20:00:27 | INFO  | Task 9a1f4e8c-0c35-4ae6-b043-2ed60dabadb7 is in state STARTED 2025-06-02 20:00:27.354805 | orchestrator | 2025-06-02 20:00:27 | INFO  | Task 85c91c73-14c3-4551-978b-a0a75100d21a is in state STARTED 2025-06-02 20:00:27.355847 | orchestrator | 2025-06-02 20:00:27 | INFO  | Task 79e4a91d-9f91-4c53-92e4-cb6c7156cd28 is in state STARTED 2025-06-02 20:00:27.358287 | orchestrator | 2025-06-02 20:00:27 | INFO  | Task 4efb0510-077d-4334-9816-4b4ce82f3dee is in state STARTED 2025-06-02 20:00:27.358323 | orchestrator | 2025-06-02 20:00:27 | INFO  | Task 4b60122b-2761-44e0-9fd9-858842834dfd is in state STARTED 2025-06-02 20:00:27.358329 | orchestrator | 2025-06-02 20:00:27 | INFO  | Wait 1 second(s) until the next check 2025-06-02 20:00:30.418135 | orchestrator | 2025-06-02 20:00:30 | INFO  | Task 9a1f4e8c-0c35-4ae6-b043-2ed60dabadb7 is in state STARTED 2025-06-02 20:00:30.424792 | orchestrator | 2025-06-02 20:00:30 | INFO  | Task 85c91c73-14c3-4551-978b-a0a75100d21a is in state STARTED 2025-06-02 20:00:30.431610 | orchestrator | 2025-06-02 20:00:30 | INFO  | Task 
79e4a91d-9f91-4c53-92e4-cb6c7156cd28 is in state STARTED 2025-06-02 20:00:30.441574 | orchestrator | 2025-06-02 20:00:30 | INFO  | Task 4efb0510-077d-4334-9816-4b4ce82f3dee is in state STARTED 2025-06-02 20:00:30.441684 | orchestrator | 2025-06-02 20:00:30 | INFO  | Task 4b60122b-2761-44e0-9fd9-858842834dfd is in state STARTED 2025-06-02 20:00:30.441695 | orchestrator | 2025-06-02 20:00:30 | INFO  | Wait 1 second(s) until the next check 2025-06-02 20:00:33.480638 | orchestrator | 2025-06-02 20:00:33 | INFO  | Task 9a1f4e8c-0c35-4ae6-b043-2ed60dabadb7 is in state STARTED 2025-06-02 20:00:33.480732 | orchestrator | 2025-06-02 20:00:33 | INFO  | Task 85c91c73-14c3-4551-978b-a0a75100d21a is in state STARTED 2025-06-02 20:00:33.482743 | orchestrator | 2025-06-02 20:00:33 | INFO  | Task 79e4a91d-9f91-4c53-92e4-cb6c7156cd28 is in state STARTED 2025-06-02 20:00:33.483881 | orchestrator | 2025-06-02 20:00:33 | INFO  | Task 4efb0510-077d-4334-9816-4b4ce82f3dee is in state STARTED 2025-06-02 20:00:33.485282 | orchestrator | 2025-06-02 20:00:33 | INFO  | Task 4b60122b-2761-44e0-9fd9-858842834dfd is in state STARTED 2025-06-02 20:00:33.486120 | orchestrator | 2025-06-02 20:00:33 | INFO  | Wait 1 second(s) until the next check 2025-06-02 20:00:36.531016 | orchestrator | 2025-06-02 20:00:36 | INFO  | Task 9a1f4e8c-0c35-4ae6-b043-2ed60dabadb7 is in state STARTED 2025-06-02 20:00:36.531157 | orchestrator | 2025-06-02 20:00:36 | INFO  | Task 85c91c73-14c3-4551-978b-a0a75100d21a is in state STARTED 2025-06-02 20:00:36.535581 | orchestrator | 2025-06-02 20:00:36 | INFO  | Task 79e4a91d-9f91-4c53-92e4-cb6c7156cd28 is in state STARTED 2025-06-02 20:00:36.537623 | orchestrator | 2025-06-02 20:00:36 | INFO  | Task 4efb0510-077d-4334-9816-4b4ce82f3dee is in state STARTED 2025-06-02 20:00:36.538429 | orchestrator | 2025-06-02 20:00:36 | INFO  | Task 4b60122b-2761-44e0-9fd9-858842834dfd is in state STARTED 2025-06-02 20:00:36.538453 | orchestrator | 2025-06-02 20:00:36 | INFO  | Wait 1 
second(s) until the next check 2025-06-02 20:00:39.576499 | orchestrator | 2025-06-02 20:00:39 | INFO  | Task 9a1f4e8c-0c35-4ae6-b043-2ed60dabadb7 is in state STARTED 2025-06-02 20:00:39.578423 | orchestrator | 2025-06-02 20:00:39 | INFO  | Task 85c91c73-14c3-4551-978b-a0a75100d21a is in state STARTED 2025-06-02 20:00:39.578516 | orchestrator | 2025-06-02 20:00:39 | INFO  | Task 79e4a91d-9f91-4c53-92e4-cb6c7156cd28 is in state STARTED 2025-06-02 20:00:39.578632 | orchestrator | 2025-06-02 20:00:39 | INFO  | Task 4efb0510-077d-4334-9816-4b4ce82f3dee is in state STARTED 2025-06-02 20:00:39.579387 | orchestrator | 2025-06-02 20:00:39 | INFO  | Task 4b60122b-2761-44e0-9fd9-858842834dfd is in state STARTED 2025-06-02 20:00:39.579449 | orchestrator | 2025-06-02 20:00:39 | INFO  | Wait 1 second(s) until the next check 2025-06-02 20:00:42.615778 | orchestrator | 2025-06-02 20:00:42 | INFO  | Task 9a1f4e8c-0c35-4ae6-b043-2ed60dabadb7 is in state STARTED 2025-06-02 20:00:42.615966 | orchestrator | 2025-06-02 20:00:42 | INFO  | Task 85c91c73-14c3-4551-978b-a0a75100d21a is in state STARTED 2025-06-02 20:00:42.616593 | orchestrator | 2025-06-02 20:00:42 | INFO  | Task 79e4a91d-9f91-4c53-92e4-cb6c7156cd28 is in state STARTED 2025-06-02 20:00:42.616937 | orchestrator | 2025-06-02 20:00:42 | INFO  | Task 4efb0510-077d-4334-9816-4b4ce82f3dee is in state STARTED 2025-06-02 20:00:42.623774 | orchestrator | 2025-06-02 20:00:42 | INFO  | Task 4b60122b-2761-44e0-9fd9-858842834dfd is in state STARTED 2025-06-02 20:00:42.623849 | orchestrator | 2025-06-02 20:00:42 | INFO  | Wait 1 second(s) until the next check 2025-06-02 20:00:45.645593 | orchestrator | 2025-06-02 20:00:45 | INFO  | Task 9a1f4e8c-0c35-4ae6-b043-2ed60dabadb7 is in state STARTED 2025-06-02 20:00:45.647052 | orchestrator | 2025-06-02 20:00:45 | INFO  | Task 85c91c73-14c3-4551-978b-a0a75100d21a is in state STARTED 2025-06-02 20:00:45.647442 | orchestrator | 2025-06-02 20:00:45 | INFO  | Task 
79e4a91d-9f91-4c53-92e4-cb6c7156cd28 is in state STARTED
2025-06-02 20:00:45.649287 | orchestrator | 2025-06-02 20:00:45 | INFO  | Task 4efb0510-077d-4334-9816-4b4ce82f3dee is in state STARTED
2025-06-02 20:00:45.650011 | orchestrator | 2025-06-02 20:00:45 | INFO  | Task 4b60122b-2761-44e0-9fd9-858842834dfd is in state STARTED
2025-06-02 20:00:45.650870 | orchestrator | 2025-06-02 20:00:45 | INFO  | Wait 1 second(s) until the next check
2025-06-02 20:00:48.677863 | orchestrator | 2025-06-02 20:00:48 | INFO  | Task bfa6412a-e0af-41ef-8074-aea7a9bc6751 is in state STARTED
2025-06-02 20:00:48.678142 | orchestrator | 2025-06-02 20:00:48 | INFO  | Task 9a1f4e8c-0c35-4ae6-b043-2ed60dabadb7 is in state STARTED
2025-06-02 20:00:48.678631 | orchestrator | 2025-06-02 20:00:48 | INFO  | Task 85c91c73-14c3-4551-978b-a0a75100d21a is in state STARTED
2025-06-02 20:00:48.679953 | orchestrator | 2025-06-02 20:00:48 | INFO  | Task 79e4a91d-9f91-4c53-92e4-cb6c7156cd28 is in state SUCCESS
2025-06-02 20:00:48.681893 | orchestrator | PLAY [Prepare all k3s nodes] ***************************************************
2025-06-02 20:00:48.681943 | orchestrator | TASK [k3s_prereq : Validating arguments against arg spec 'main' - Prerequisites] ***
2025-06-02 20:00:48.681952 | orchestrator | Monday 02 June 2025 19:55:49 +0000 (0:00:00.202) 0:00:00.202 ***********
2025-06-02 20:00:48.681960 | orchestrator | ok: [testbed-node-3] [testbed-node-4] [testbed-node-5] [testbed-node-0] [testbed-node-1] [testbed-node-2]
2025-06-02 20:00:48.682086 | orchestrator | TASK [k3s_prereq : Set same timezone on every Server] **************************
2025-06-02 20:00:48.682098 | orchestrator | Monday 02 June 2025 19:55:50 +0000 (0:00:00.846) 0:00:01.048 ***********
2025-06-02 20:00:48.682106 | orchestrator | skipping: [testbed-node-3] [testbed-node-4] [testbed-node-5] [testbed-node-0] [testbed-node-1] [testbed-node-2]
2025-06-02 20:00:48.682191 | orchestrator | TASK [k3s_prereq : Set SELinux to disabled state] ******************************
2025-06-02 20:00:48.682199 | orchestrator | Monday 02 June 2025 19:55:51 +0000 (0:00:00.757) 0:00:01.805 ***********
2025-06-02 20:00:48.682207 | orchestrator | skipping: [testbed-node-3] [testbed-node-4] [testbed-node-5] [testbed-node-0] [testbed-node-1] [testbed-node-2]
2025-06-02 20:00:48.682290 | orchestrator | TASK [k3s_prereq : Enable IPv4 forwarding] *************************************
2025-06-02 20:00:48.682298 | orchestrator | Monday 02 June 2025 19:55:52 +0000 (0:00:00.973) 0:00:02.779 ***********
2025-06-02 20:00:48.682305 | orchestrator | changed: [testbed-node-3] [testbed-node-4] [testbed-node-5] [testbed-node-2] [testbed-node-0] [testbed-node-1]
2025-06-02 20:00:48.682359 | orchestrator | TASK [k3s_prereq : Enable IPv6 forwarding] *************************************
2025-06-02 20:00:48.682367 | orchestrator | Monday 02 June 2025 19:55:55 +0000 (0:00:02.942) 0:00:05.721 ***********
2025-06-02 20:00:48.682375 | orchestrator | changed: [testbed-node-3] [testbed-node-4] [testbed-node-5] [testbed-node-0] [testbed-node-1] [testbed-node-2]
2025-06-02 20:00:48.682437 | orchestrator | TASK [k3s_prereq : Enable IPv6 router advertisements] **************************
2025-06-02 20:00:48.682446 | orchestrator | Monday 02 June 2025 19:55:56 +0000 (0:00:01.225) 0:00:06.946 ***********
2025-06-02 20:00:48.682455 | orchestrator | changed: [testbed-node-3] [testbed-node-4] [testbed-node-5] [testbed-node-0] [testbed-node-1] [testbed-node-2]
2025-06-02 20:00:48.682517 | orchestrator | TASK [k3s_prereq : Add br_netfilter to /etc/modules-load.d/] *******************
2025-06-02 20:00:48.682526 | orchestrator | Monday 02 June 2025 19:55:57 +0000 (0:00:01.254) 0:00:08.200 ***********
2025-06-02 20:00:48.682583 | orchestrator | skipping: [testbed-node-3] [testbed-node-4] [testbed-node-5] [testbed-node-0]
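[Editorial aside: the k3s_prereq tasks above enable IPv4/IPv6 forwarding and IPv6 router advertisements via sysctl on every node. A minimal sketch of the equivalent settings, assuming the standard Linux sysctl key names; the drop-in filename is illustrative, and the file is written to a temp path here so the sketch runs unprivileged:]

```shell
# Sketch: the sysctl keys the k3s_prereq tasks above toggle. Key names are
# the standard Linux ones (assumed, not taken verbatim from the role).
conf="$(mktemp)"   # a real deployment would use e.g. /etc/sysctl.d/90-k3s.conf
cat > "$conf" <<'EOF'
net.ipv4.ip_forward = 1
net.ipv6.conf.all.forwarding = 1
net.ipv6.conf.all.accept_ra = 2
EOF
# Applying the settings would require root: sysctl -p "$conf"
grep -c '= ' "$conf"   # prints 3
```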
2025-06-02 20:00:48.682634 | orchestrator | skipping: [testbed-node-1] [testbed-node-2]
2025-06-02 20:00:48.682669 | orchestrator | TASK [k3s_prereq : Load br_netfilter] ******************************************
2025-06-02 20:00:48.682681 | orchestrator | Monday 02 June 2025 19:55:58 +0000 (0:00:00.861) 0:00:09.062 ***********
2025-06-02 20:00:48.682692 | orchestrator | skipping: [testbed-node-3] [testbed-node-4] [testbed-node-5] [testbed-node-0] [testbed-node-1] [testbed-node-2]
2025-06-02 20:00:48.682774 | orchestrator | TASK [k3s_prereq : Set bridge-nf-call-iptables (just to be sure)] **************
2025-06-02 20:00:48.682785 | orchestrator | Monday 02 June 2025 19:55:59 +0000 (0:00:00.683) 0:00:09.746 ***********
2025-06-02 20:00:48.682798 | orchestrator | skipping (items net.bridge.bridge-nf-call-iptables, net.bridge.bridge-nf-call-ip6tables): [testbed-node-3] [testbed-node-4] [testbed-node-5] [testbed-node-0] [testbed-node-1] [testbed-node-2]
2025-06-02 20:00:48.683032 | orchestrator | TASK [k3s_prereq : Add /usr/local/bin to sudo secure_path] *********************
2025-06-02 20:00:48.683040 | orchestrator | Monday 02 June 2025 19:56:00 +0000 (0:00:00.931) 0:00:10.677 ***********
2025-06-02 20:00:48.683048 | orchestrator | skipping: [testbed-node-3] [testbed-node-4] [testbed-node-5] [testbed-node-0] [testbed-node-1] [testbed-node-2]
2025-06-02 20:00:48.683110 | orchestrator | TASK [k3s_download : Validating arguments against arg spec 'main' - Manage the downloading of K3S binaries] ***
2025-06-02 20:00:48.683119 | orchestrator | Monday 02 June 2025 19:56:01 +0000 (0:00:01.055) 0:00:11.733 ***********
2025-06-02 20:00:48.683127 | orchestrator | ok: [testbed-node-3] [testbed-node-4] [testbed-node-5] [testbed-node-0] [testbed-node-1] [testbed-node-2]
2025-06-02 20:00:48.683188 | orchestrator | TASK [k3s_download : Download k3s binary x64] **********************************
2025-06-02 20:00:48.683196 | orchestrator | Monday 02 June 2025 19:56:02 +0000 (0:00:00.721) 0:00:12.454 ***********
2025-06-02 20:00:48.683204 | orchestrator | changed: [testbed-node-5] [testbed-node-4] [testbed-node-1] [testbed-node-0] [testbed-node-3] [testbed-node-2]
2025-06-02 20:00:48.683306 | orchestrator | TASK [k3s_download : Download k3s binary arm64] ********************************
2025-06-02 20:00:48.683318 | orchestrator | Monday 02 June 2025 19:56:11 +0000 (0:00:09.553) 0:00:22.008 ***********
2025-06-02 20:00:48.683330 | orchestrator | skipping: [testbed-node-3] [testbed-node-4] [testbed-node-5] [testbed-node-0] [testbed-node-1] [testbed-node-2]
2025-06-02 20:00:48.683401 | orchestrator | TASK [k3s_download : Download k3s binary armhf] ********************************
2025-06-02 20:00:48.683409 | orchestrator | Monday 02 June 2025 19:56:12 +0000 (0:00:01.065) 0:00:23.073 ***********
2025-06-02 20:00:48.683416 | orchestrator | skipping: [testbed-node-3] [testbed-node-4] [testbed-node-5] [testbed-node-0] [testbed-node-1] [testbed-node-2]
2025-06-02 20:00:48.683470 | orchestrator | TASK [k3s_custom_registries : Validating arguments against arg spec 'main' - Configure the use of a custom container registry] ***
2025-06-02 20:00:48.683479 | orchestrator | Monday 02 June 2025 19:56:14 +0000 (0:00:01.727) 0:00:24.800 ***********
2025-06-02 20:00:48.683486 | orchestrator | skipping: [testbed-node-3] [testbed-node-4] [testbed-node-5] [testbed-node-0] [testbed-node-1] [testbed-node-2]
2025-06-02 20:00:48.683540 | orchestrator | TASK [k3s_custom_registries : Create directory /etc/rancher/k3s] ***************
2025-06-02 20:00:48.683547 | orchestrator | Monday 02 June 2025 19:56:15 +0000 (0:00:00.780) 0:00:25.581 ***********
2025-06-02 20:00:48.683555 | orchestrator | skipping (items rancher, rancher/k3s): [testbed-node-3] [testbed-node-4] [testbed-node-5] [testbed-node-0] [testbed-node-1] [testbed-node-2]
2025-06-02 20:00:48.683710 | orchestrator | TASK [k3s_custom_registries : Insert registries into /etc/rancher/k3s/registries.yaml] ***
2025-06-02 20:00:48.683724 | orchestrator | Monday 02 June 2025 19:56:16 +0000 (0:00:00.915) 0:00:26.497 ***********
2025-06-02 20:00:48.683733 | orchestrator | skipping: [testbed-node-3] [testbed-node-4] [testbed-node-5] [testbed-node-0] [testbed-node-1] [testbed-node-2]
2025-06-02 20:00:48.683786 | orchestrator | PLAY [Deploy k3s master nodes] *************************************************
2025-06-02 20:00:48.683801 | orchestrator | TASK [k3s_server : Validating arguments against arg spec 'main' - Setup k3s servers] ***
2025-06-02 20:00:48.683809 | orchestrator | Monday 02 June 2025 19:56:17 +0000 (0:00:01.267) 0:00:27.765 ***********
2025-06-02 20:00:48.683817 | orchestrator | ok: [testbed-node-0] [testbed-node-2] [testbed-node-1]
2025-06-02 20:00:48.683848 | orchestrator | TASK [k3s_server : Stop k3s-init] **********************************************
2025-06-02 20:00:48.683857 | orchestrator | Monday 02 June 2025 19:56:18 +0000 (0:00:01.004) 0:00:28.769 ***********
2025-06-02 20:00:48.683864 | orchestrator | ok: [testbed-node-0] [testbed-node-2] [testbed-node-1]
2025-06-02 20:00:48.683918 | orchestrator | TASK [k3s_server : Stop k3s] ***************************************************
2025-06-02 20:00:48.683926 | orchestrator | Monday 02 June 2025 19:56:19 +0000 (0:00:01.009) 0:00:29.778 ***********
2025-06-02 20:00:48.683933 | orchestrator | ok: [testbed-node-0] [testbed-node-1] [testbed-node-2]
2025-06-02 20:00:48.683964 | orchestrator | TASK [k3s_server : Clean previous runs of k3s-init] ****************************
2025-06-02 20:00:48.683972 | orchestrator | Monday 02 June 2025 19:56:20 +0000 (0:00:01.281) 0:00:31.060 ***********
2025-06-02 20:00:48.683980 | orchestrator | ok: [testbed-node-2] [testbed-node-1] [testbed-node-0]
2025-06-02 20:00:48.684011 | orchestrator | TASK [k3s_server : Deploy K3s http_proxy conf] *********************************
2025-06-02 20:00:48.684019 | orchestrator | Monday 02 June 2025 19:56:21 +0000 (0:00:00.906) 0:00:31.966 ***********
2025-06-02 20:00:48.684027 | orchestrator | skipping: [testbed-node-0] [testbed-node-1] [testbed-node-2]
2025-06-02 20:00:48.684057 | orchestrator | TASK [k3s_server : Deploy vip manifest] ****************************************
2025-06-02 20:00:48.684065 | orchestrator | Monday 02 June 2025 19:56:22 +0000 (0:00:00.580) 0:00:32.547 ***********
2025-06-02 20:00:48.684073 | orchestrator | included: /ansible/roles/k3s_server/tasks/vip.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-02 20:00:48.684088 | orchestrator | TASK [k3s_server : Set _kube_vip_bgp_peers fact] *******************************
2025-06-02 20:00:48.684102 | orchestrator | Monday 02 June 2025 19:56:22 +0000 (0:00:00.700) 0:00:33.248 ***********
2025-06-02 20:00:48.684109 | orchestrator | ok: [testbed-node-0] [testbed-node-1] [testbed-node-2]
2025-06-02 20:00:48.684140 | orchestrator | TASK [k3s_server : Create manifests directory on first master] *****************
2025-06-02 20:00:48.684148 | orchestrator | Monday 02 June 2025 19:56:25 +0000 (0:00:02.849) 0:00:36.097 ***********
2025-06-02 20:00:48.684156 | orchestrator | skipping: [testbed-node-1] [testbed-node-2]
2025-06-02 20:00:48.684171 | orchestrator | changed: [testbed-node-0]
2025-06-02 20:00:48.684186 | orchestrator | TASK [k3s_server : Download vip rbac manifest to first master] *****************
2025-06-02 20:00:48.684194 | orchestrator | Monday 02 June 2025 19:56:26 +0000 (0:00:00.901) 0:00:36.999 ***********
2025-06-02 20:00:48.684201 | orchestrator | skipping: [testbed-node-1] [testbed-node-2]
2025-06-02 20:00:48.684217 | orchestrator | changed: [testbed-node-0]
2025-06-02 20:00:48.684232 | orchestrator | TASK [k3s_server : Copy vip manifest to first master] **************************
2025-06-02 20:00:48.684240 | orchestrator | Monday 02 June 2025 19:56:27 +0000 (0:00:01.314) 0:00:38.313 ***********
2025-06-02 20:00:48.684327 | orchestrator | skipping: [testbed-node-1] [testbed-node-2]
2025-06-02 20:00:48.684344 | orchestrator | changed: [testbed-node-0]
2025-06-02 20:00:48.684360 | orchestrator | TASK [k3s_server : Deploy metallb manifest] ************************************
2025-06-02 20:00:48.684368 | orchestrator | Monday 02 June 2025 19:56:29 +0000 (0:00:01.984) 0:00:40.297 ***********
2025-06-02 20:00:48.684376 | orchestrator | skipping: [testbed-node-0] [testbed-node-1] [testbed-node-2]
2025-06-02 20:00:48.684407 | orchestrator | TASK [k3s_server : Deploy kube-vip manifest] ***********************************
2025-06-02 20:00:48.684415 | orchestrator | Monday 02 June 2025 19:56:30 +0000 (0:00:00.549) 0:00:40.847 ***********
2025-06-02 20:00:48.684422 | orchestrator | skipping: [testbed-node-0] [testbed-node-1] [testbed-node-2]
2025-06-02 20:00:48.684453 | orchestrator | TASK [k3s_server : Init cluster inside the transient k3s-init service] *********
2025-06-02 20:00:48.684461 | orchestrator | Monday 02 June 2025 19:56:31 +0000 (0:00:00.594) 0:00:41.442 ***********
2025-06-02 20:00:48.684468 | orchestrator | changed: [testbed-node-0] [testbed-node-1] [testbed-node-2]
2025-06-02 20:00:48.684499 | orchestrator | TASK [k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails)] ***
2025-06-02 20:00:48.684507 | orchestrator | Monday 02 June 2025 19:56:33 +0000 (0:00:02.595) 0:00:44.037 ***********
2025-06-02 20:00:48.684521 | orchestrator | FAILED - RETRYING: [testbed-node-0] [testbed-node-1] [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left).
2025-06-02 20:00:48.684546 | orchestrator | FAILED - RETRYING: [testbed-node-0] [testbed-node-1] [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left).
2025-06-02 20:00:48.685072 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left).
FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left).
FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left).
FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left).
FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left).
FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left).
FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (16 retries left).
FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (16 retries left).
FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (16 retries left).
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [k3s_server : Save logs of k3s-init.service] ******************************
Monday 02 June 2025  19:57:29 +0000 (0:00:55.889)       0:01:39.927 ***********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [k3s_server : Kill the temporary service used for initialization] *********
Monday 02 June 2025  19:57:30 +0000 (0:00:00.725)       0:01:40.652 ***********
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [k3s_server : Copy K3s service file] **************************************
Monday 02 June 2025  19:57:31 +0000 (0:00:01.304)       0:01:41.957 ***********
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [k3s_server : Enable and check K3s service] *******************************
Monday 02 June 2025  19:57:32 +0000 (0:00:01.322)       0:01:43.279 ***********
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [k3s_server : Wait for node-token] ****************************************
Monday 02 June 2025  19:57:49 +0000 (0:00:16.374)       0:01:59.654 ***********
ok: [testbed-node-1]
ok: [testbed-node-0]
ok: [testbed-node-2]

TASK [k3s_server : Register node-token file access mode] ***********************
Monday 02 June 2025  19:57:49 +0000 (0:00:00.604)       0:02:00.258 ***********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [k3s_server : Change file access node-token] ******************************
Monday 02 June 2025  19:57:50 +0000 (0:00:00.602)       0:02:00.861 ***********
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [k3s_server : Read node-token from master] ********************************
Monday 02 June 2025  19:57:51 +0000 (0:00:00.575)       0:02:01.437 ***********
ok: [testbed-node-1]
ok: [testbed-node-0]
ok: [testbed-node-2]

TASK [k3s_server : Store Master node-token] ************************************
Monday 02 June 2025  19:57:51 +0000 (0:00:00.828)       0:02:02.266 ***********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [k3s_server : Restore node-token file access] *****************************
Monday 02 June 2025  19:57:52 +0000 (0:00:00.267)       0:02:02.533 ***********
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [k3s_server : Create directory .kube] *************************************
Monday 02 June 2025  19:57:52 +0000 (0:00:00.608)       0:02:03.141 ***********
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [k3s_server : Copy config file to user home directory] ********************
Monday 02 June 2025  19:57:53 +0000 (0:00:00.622)       0:02:03.764 ***********
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [k3s_server : Configure kubectl cluster to https://192.168.16.8:6443] *****
Monday 02 June 2025  19:57:54 +0000 (0:00:01.052)       0:02:04.816 ***********
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [k3s_server : Create kubectl symlink] *************************************
Monday 02 June 2025  19:57:55 +0000 (0:00:00.782)       0:02:05.598 ***********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [k3s_server : Create crictl symlink] **************************************
Monday 02 June 2025  19:57:55 +0000 (0:00:00.258)       0:02:05.857 ***********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [k3s_server : Get contents of manifests folder] ***************************
Monday 02 June 2025  19:57:55 +0000 (0:00:00.266)       0:02:06.124 ***********
ok: [testbed-node-1]
ok: [testbed-node-2]
ok: [testbed-node-0]

TASK [k3s_server : Get sub dirs of manifests folder] ***************************
Monday 02 June 2025  19:57:56 +0000 (0:00:00.894)       0:02:07.018 ***********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start] ***
Monday 02 June 2025  19:57:57 +0000 (0:00:00.544)       0:02:07.562 ***********
changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip.yaml)
changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip-rbac.yaml)
changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)
changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)
changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)

PLAY [Deploy k3s worker nodes] *************************************************

TASK [k3s_agent : Validating arguments against arg spec 'main' - Setup k3s agents] ***
Monday 02 June 2025  19:58:00 +0000 (0:00:03.011)       0:02:10.574 ***********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [k3s_agent : Check if system is PXE-booted] *******************************
Monday 02 June 2025  19:58:00 +0000 (0:00:00.364)       0:02:10.939 ***********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [k3s_agent : Set fact for PXE-booted system] ******************************
Monday 02 June 2025  19:58:01 +0000 (0:00:00.600)       0:02:11.540 ***********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [k3s_agent : Include http_proxy configuration tasks] **********************
Monday 02 June 2025  19:58:01 +0000 (0:00:00.290)       0:02:11.831 ***********
included: /ansible/roles/k3s_agent/tasks/http_proxy.yml for testbed-node-3, testbed-node-4, testbed-node-5

TASK [k3s_agent : Create k3s-node.service.d directory] *************************
Monday 02 June 2025  19:58:02 +0000 (0:00:00.608)       0:02:12.439 ***********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [k3s_agent : Copy K3s http_proxy conf file] *******************************
Monday 02 June 2025  19:58:02 +0000 (0:00:00.269)       0:02:12.709 ***********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [k3s_agent : Deploy K3s http_proxy conf] **********************************
Monday 02 June 2025  19:58:02 +0000 (0:00:00.286)       0:02:12.995 ***********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [k3s_agent : Configure the k3s service] ***********************************
Monday 02 June 2025  19:58:02 +0000 (0:00:00.278)       0:02:13.274 ***********
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]

TASK [k3s_agent : Manage k3s service] ******************************************
Monday 02 June 2025  19:58:04 +0000 (0:00:01.651)       0:02:14.926 ***********
changed: [testbed-node-4]
changed: [testbed-node-3]
changed: [testbed-node-5]

PLAY [Prepare kubeconfig file] *************************************************

TASK [Get home directory of operator user] *************************************
Monday 02 June 2025  19:58:13 +0000 (0:00:08.721)       0:02:23.647 ***********
ok: [testbed-manager]

TASK [Create .kube directory] **************************************************
Monday 02 June 2025  19:58:14 +0000 (0:00:00.750)       0:02:24.397 ***********
changed: [testbed-manager]

TASK [Get kubeconfig file] *****************************************************
Monday 02 June 2025  19:58:14 +0000 (0:00:00.386)       0:02:24.784 ***********
ok: [testbed-manager -> testbed-node-0(192.168.16.10)]

TASK [Write kubeconfig file] ***************************************************
Monday 02 June 2025  19:58:15 +0000 (0:00:00.828)       0:02:25.613 ***********
changed: [testbed-manager]

TASK [Change server address in the kubeconfig] *********************************
Monday 02 June 2025  19:58:15 +0000 (0:00:00.725)       0:02:26.339 ***********
changed: [testbed-manager]

TASK [Make kubeconfig available for use inside the manager service] ************
Monday 02 June 2025  19:58:16 +0000 (0:00:00.506)       0:02:26.846 ***********
changed: [testbed-manager -> localhost]

TASK [Change server address in the kubeconfig inside the manager service] ******
Monday 02 June 2025  19:58:17 +0000 (0:00:01.488)       0:02:28.334 ***********
changed: [testbed-manager -> localhost]

TASK [Set KUBECONFIG environment variable] *************************************
Monday 02 June 2025  19:58:18 +0000 (0:00:00.775)       0:02:29.110 ***********
changed: [testbed-manager]

TASK [Enable kubectl command line completion] **********************************
Monday 02 June 2025  19:58:19 +0000 (0:00:00.340)       0:02:29.450 ***********
changed: [testbed-manager]

PLAY [Apply role kubectl] ******************************************************

TASK [kubectl : Gather variables for each operating system] ********************
Monday 02 June 2025  19:58:19 +0000 (0:00:00.367)       0:02:29.818 ***********
ok: [testbed-manager]

TASK [kubectl : Include distribution specific install tasks] *******************
Monday 02 June 2025  19:58:19 +0000 (0:00:00.123)       0:02:29.941 ***********
included: /ansible/roles/kubectl/tasks/install-Debian-family.yml for testbed-manager

TASK [kubectl : Remove old architecture-dependent repository] ******************
Monday 02 June 2025  19:58:19 +0000 (0:00:00.320)       0:02:30.262 ***********
ok: [testbed-manager]

TASK [kubectl : Install apt-transport-https package] ***************************
Monday 02 June 2025  19:58:20 +0000 (0:00:00.735)       0:02:30.998 ***********
ok: [testbed-manager]

TASK [kubectl : Add repository gpg key] ****************************************
Monday 02 June 2025  19:58:21 +0000 (0:00:01.271)       0:02:32.270 ***********
changed: [testbed-manager]

TASK [kubectl : Set permissions of gpg key] ************************************
Monday 02 June 2025  19:58:22 +0000 (0:00:00.662)       0:02:32.932 ***********
ok: [testbed-manager]

TASK [kubectl : Add repository Debian] *****************************************
Monday 02 June 2025  19:58:22 +0000 (0:00:00.340)       0:02:33.273 ***********
changed: [testbed-manager]

TASK [kubectl : Install required packages] *************************************
Monday 02 June 2025  19:58:29 +0000 (0:00:06.106)       0:02:39.380 ***********
changed: [testbed-manager]

TASK [kubectl : Remove kubectl symlink] ****************************************
Monday 02 June 2025  19:58:39 +0000 (0:00:10.911)       0:02:50.292 ***********
ok: [testbed-manager]

PLAY [Run post actions on master nodes] ****************************************

TASK [k3s_server_post : Validating arguments against arg spec 'main' - Configure k3s cluster] ***
Monday 02 June 2025  19:58:40 +0000 (0:00:00.471)       0:02:50.764 ***********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [k3s_server_post : Deploy calico] *****************************************
Monday 02 June 2025  19:58:40 +0000 (0:00:00.492)       0:02:51.256 ***********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [k3s_server_post : Deploy cilium] *****************************************
Monday 02 June 2025  19:58:41 +0000 (0:00:00.335)       0:02:51.592 ***********
included: /ansible/roles/k3s_server_post/tasks/cilium.yml for testbed-node-0, testbed-node-1, testbed-node-2

TASK [k3s_server_post : Create tmp directory on first master] ******************
Monday 02 June 2025  19:58:41 +0000 (0:00:00.456)       0:02:52.049 ***********
changed: [testbed-node-0 -> localhost]

TASK [k3s_server_post : Wait for connectivity to kube VIP] *********************
Monday 02 June 2025  19:58:42 +0000 (0:00:00.937)       0:02:52.987 ***********
ok: [testbed-node-0 -> localhost]

TASK [k3s_server_post : Fail if kube VIP not reachable] ************************
Monday 02 June 2025  19:58:43 +0000 (0:00:00.759)       0:02:53.746 ***********
skipping: [testbed-node-0]

TASK [k3s_server_post : Test for existing Cilium install] **********************
Monday 02 June 2025  19:58:43 +0000 (0:00:00.213)       0:02:53.959 ***********
ok: [testbed-node-0 -> localhost]

TASK [k3s_server_post : Check Cilium version] **********************************
Monday 02 June 2025  19:58:44 +0000 (0:00:00.906)       0:02:54.866 ***********
skipping: [testbed-node-0]

TASK [k3s_server_post : Parse installed Cilium version] ************************
Monday 02 June 2025  19:58:44 +0000 (0:00:00.223)       0:02:55.089 ***********
skipping: [testbed-node-0]

TASK [k3s_server_post : Determine if Cilium needs update] **********************
Monday 02 June 2025  19:58:44 +0000 (0:00:00.142)       0:02:55.232 ***********
skipping: [testbed-node-0]

TASK [k3s_server_post : Log result] ********************************************
Monday 02 June 2025  19:58:45 +0000 (0:00:00.164)       0:02:55.396 ***********
skipping: [testbed-node-0]

TASK [k3s_server_post : Install Cilium] ****************************************
Monday 02 June 2025  19:58:45 +0000 (0:00:00.212)       0:02:55.609 ***********
changed: [testbed-node-0 -> localhost]

TASK [k3s_server_post : Wait for Cilium resources] *****************************
Monday 02 June 2025  19:58:50 +0000 (0:00:04.776)       0:03:00.385 ***********
ok: [testbed-node-0 -> localhost] => (item=deployment/cilium-operator)
FAILED - RETRYING: [testbed-node-0 -> localhost]: Wait for Cilium resources (30 retries left).
FAILED - RETRYING: [testbed-node-0 -> localhost]: Wait for Cilium resources (29 retries left).
ok: [testbed-node-0 -> localhost] => (item=daemonset/cilium)
ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-relay)
ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-ui)

TASK [k3s_server_post : Set _cilium_bgp_neighbors fact] ************************
Monday 02 June 2025  20:00:18 +0000 (0:01:28.849)       0:04:29.235 ***********
ok: [testbed-node-0 -> localhost]

TASK [k3s_server_post : Copy BGP manifests to first master] ********************
Monday 02 June 2025  20:00:20 +0000 (0:00:01.253)       0:04:30.489 ***********
changed: [testbed-node-0 -> localhost]

TASK [k3s_server_post : Apply BGP manifests] ***********************************
Monday 02 June 2025  20:00:21 +0000 (0:00:01.612)       0:04:32.102 ***********
changed: [testbed-node-0 -> localhost]

TASK [k3s_server_post : Print error message if BGP manifests application fails] ***
Monday 02 June 2025  20:00:23 +0000 (0:00:01.618)       0:04:33.721 ***********
skipping: [testbed-node-0]

TASK [k3s_server_post : Test for BGP config resources] *************************
Monday 02 June 2025  20:00:23 +0000 (0:00:00.225)       0:04:33.946 ***********
ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumBGPPeeringPolicy.cilium.io)
ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumLoadBalancerIPPool.cilium.io)

TASK [k3s_server_post : Deploy metallb pool] ***********************************
Monday 02 June 2025  20:00:25 +0000 (0:00:02.060)       0:04:36.006 ***********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [k3s_server_post : Remove tmp directory used for manifests] ***************
Monday 02 June 2025  20:00:25 +0000 (0:00:00.320)       0:04:36.327 ***********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

PLAY [Apply role k9s] **********************************************************

TASK [k9s : Gather variables for each operating system] ************************
Monday 02 June 2025  20:00:27 +0000 (0:00:01.040)       0:04:37.368 ***********
ok: [testbed-manager]

TASK [k9s : Include distribution specific install tasks] ***********************
Monday 02 June 2025  20:00:27 +0000 (0:00:00.355)       0:04:37.723 ***********
included: /ansible/roles/k9s/tasks/install-Debian-family.yml for testbed-manager

TASK [k9s : Install k9s packages] **********************************************
Monday 02 June 2025  20:00:27 +0000 (0:00:00.229)       0:04:37.952 ***********
changed: [testbed-manager]

PLAY [Manage labels, annotations, and taints on all k3s nodes] *****************

TASK [Merge labels, annotations, and taints] ***********************************
Monday 02 June 2025  20:00:33 +0000 (0:00:06.176)       0:04:44.129 ***********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]
ok: [testbed-node-0]
ok: [testbed-node-1]
orchestrator | ok: [testbed-node-2] 2025-06-02 20:00:48.687896 | orchestrator | 2025-06-02 20:00:48.687902 | orchestrator | TASK [Manage labels] *********************************************************** 2025-06-02 20:00:48.687908 | orchestrator | Monday 02 June 2025 20:00:34 +0000 (0:00:00.882) 0:04:45.011 *********** 2025-06-02 20:00:48.687914 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/compute-plane=true) 2025-06-02 20:00:48.687925 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/compute-plane=true) 2025-06-02 20:00:48.687932 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2025-06-02 20:00:48.687938 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/compute-plane=true) 2025-06-02 20:00:48.687944 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2025-06-02 20:00:48.687950 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2025-06-02 20:00:48.687956 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2025-06-02 20:00:48.687962 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2025-06-02 20:00:48.687968 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2025-06-02 20:00:48.687974 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=openstack-control-plane=enabled) 2025-06-02 20:00:48.687980 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2025-06-02 20:00:48.687986 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2025-06-02 20:00:48.687992 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=openstack-control-plane=enabled) 2025-06-02 20:00:48.687999 | orchestrator | 
ok: [testbed-node-2 -> localhost] => (item=openstack-control-plane=enabled) 2025-06-02 20:00:48.688005 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2025-06-02 20:00:48.688016 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2025-06-02 20:00:48.688026 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2025-06-02 20:00:48.688036 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2025-06-02 20:00:48.688046 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2025-06-02 20:00:48.688058 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2025-06-02 20:00:48.688069 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2025-06-02 20:00:48.688080 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2025-06-02 20:00:48.688089 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2025-06-02 20:00:48.688095 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2025-06-02 20:00:48.688101 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2025-06-02 20:00:48.688108 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2025-06-02 20:00:48.688119 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2025-06-02 20:00:48.688128 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2025-06-02 20:00:48.688138 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2025-06-02 20:00:48.688154 | orchestrator | ok: [testbed-node-2 
-> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2025-06-02 20:00:48.688165 | orchestrator | 2025-06-02 20:00:48.688175 | orchestrator | TASK [Manage annotations] ****************************************************** 2025-06-02 20:00:48.688189 | orchestrator | Monday 02 June 2025 20:00:45 +0000 (0:00:10.932) 0:04:55.944 *********** 2025-06-02 20:00:48.688200 | orchestrator | skipping: [testbed-node-3] 2025-06-02 20:00:48.688211 | orchestrator | skipping: [testbed-node-4] 2025-06-02 20:00:48.688221 | orchestrator | skipping: [testbed-node-5] 2025-06-02 20:00:48.688231 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:00:48.688240 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:00:48.688271 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:00:48.688278 | orchestrator | 2025-06-02 20:00:48.688284 | orchestrator | TASK [Manage taints] *********************************************************** 2025-06-02 20:00:48.688290 | orchestrator | Monday 02 June 2025 20:00:45 +0000 (0:00:00.397) 0:04:56.341 *********** 2025-06-02 20:00:48.688296 | orchestrator | skipping: [testbed-node-3] 2025-06-02 20:00:48.688302 | orchestrator | skipping: [testbed-node-4] 2025-06-02 20:00:48.688308 | orchestrator | skipping: [testbed-node-5] 2025-06-02 20:00:48.688314 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:00:48.688320 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:00:48.688326 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:00:48.688332 | orchestrator | 2025-06-02 20:00:48.688338 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-02 20:00:48.688344 | orchestrator | testbed-manager : ok=21  changed=11  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-02 20:00:48.688353 | orchestrator | testbed-node-0 : ok=46  changed=21  unreachable=0 failed=0 skipped=27  rescued=0 ignored=0 2025-06-02 20:00:48.688359 | orchestrator | testbed-node-1 : ok=34  changed=14 
 unreachable=0 failed=0 skipped=24  rescued=0 ignored=0 2025-06-02 20:00:48.688366 | orchestrator | testbed-node-2 : ok=34  changed=14  unreachable=0 failed=0 skipped=24  rescued=0 ignored=0 2025-06-02 20:00:48.688372 | orchestrator | testbed-node-3 : ok=14  changed=6  unreachable=0 failed=0 skipped=16  rescued=0 ignored=0 2025-06-02 20:00:48.688378 | orchestrator | testbed-node-4 : ok=14  changed=6  unreachable=0 failed=0 skipped=16  rescued=0 ignored=0 2025-06-02 20:00:48.688384 | orchestrator | testbed-node-5 : ok=14  changed=6  unreachable=0 failed=0 skipped=16  rescued=0 ignored=0 2025-06-02 20:00:48.688390 | orchestrator | 2025-06-02 20:00:48.688397 | orchestrator | 2025-06-02 20:00:48.688403 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-02 20:00:48.688409 | orchestrator | Monday 02 June 2025 20:00:46 +0000 (0:00:00.473) 0:04:56.815 *********** 2025-06-02 20:00:48.688415 | orchestrator | =============================================================================== 2025-06-02 20:00:48.688423 | orchestrator | k3s_server_post : Wait for Cilium resources ---------------------------- 88.85s 2025-06-02 20:00:48.688434 | orchestrator | k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails) -- 55.89s 2025-06-02 20:00:48.688445 | orchestrator | k3s_server : Enable and check K3s service ------------------------------ 16.37s 2025-06-02 20:00:48.688455 | orchestrator | Manage labels ---------------------------------------------------------- 10.93s 2025-06-02 20:00:48.688465 | orchestrator | kubectl : Install required packages ------------------------------------ 10.91s 2025-06-02 20:00:48.688473 | orchestrator | k3s_download : Download k3s binary x64 ---------------------------------- 9.55s 2025-06-02 20:00:48.688480 | orchestrator | k3s_agent : Manage k3s service ------------------------------------------ 8.72s 2025-06-02 20:00:48.688486 | orchestrator | k9s : Install k9s 
packages ---------------------------------------------- 6.18s 2025-06-02 20:00:48.688492 | orchestrator | kubectl : Add repository Debian ----------------------------------------- 6.11s 2025-06-02 20:00:48.688498 | orchestrator | k3s_server_post : Install Cilium ---------------------------------------- 4.78s 2025-06-02 20:00:48.688505 | orchestrator | k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start --- 3.01s 2025-06-02 20:00:48.688511 | orchestrator | k3s_prereq : Enable IPv4 forwarding ------------------------------------- 2.94s 2025-06-02 20:00:48.688517 | orchestrator | k3s_server : Set _kube_vip_bgp_peers fact ------------------------------- 2.85s 2025-06-02 20:00:48.688523 | orchestrator | k3s_server : Init cluster inside the transient k3s-init service --------- 2.60s 2025-06-02 20:00:48.688533 | orchestrator | k3s_server_post : Test for BGP config resources ------------------------- 2.06s 2025-06-02 20:00:48.688540 | orchestrator | k3s_server : Copy vip manifest to first master -------------------------- 1.98s 2025-06-02 20:00:48.688546 | orchestrator | k3s_download : Download k3s binary armhf -------------------------------- 1.73s 2025-06-02 20:00:48.688552 | orchestrator | k3s_agent : Configure the k3s service ----------------------------------- 1.65s 2025-06-02 20:00:48.688558 | orchestrator | k3s_server_post : Apply BGP manifests ----------------------------------- 1.62s 2025-06-02 20:00:48.688564 | orchestrator | k3s_server_post : Copy BGP manifests to first master -------------------- 1.61s 2025-06-02 20:00:48.688570 | orchestrator | 2025-06-02 20:00:48 | INFO  | Task 4efb0510-077d-4334-9816-4b4ce82f3dee is in state STARTED 2025-06-02 20:00:48.688581 | orchestrator | 2025-06-02 20:00:48 | INFO  | Task 4b60122b-2761-44e0-9fd9-858842834dfd is in state STARTED 2025-06-02 20:00:48.688681 | orchestrator | 2025-06-02 20:00:48 | INFO  | Task 06fb9282-2848-4bcc-ab38-69bc984f0afc is in 
state STARTED 2025-06-02 20:00:48.688692 | orchestrator | 2025-06-02 20:00:48 | INFO  | Wait 1 second(s) until the next check 2025-06-02 20:00:51.711142 | orchestrator | 2025-06-02 20:00:51 | INFO  | Task bfa6412a-e0af-41ef-8074-aea7a9bc6751 is in state STARTED 2025-06-02 20:00:51.711548 | orchestrator | 2025-06-02 20:00:51 | INFO  | Task 9a1f4e8c-0c35-4ae6-b043-2ed60dabadb7 is in state STARTED 2025-06-02 20:00:51.712882 | orchestrator | 2025-06-02 20:00:51 | INFO  | Task 85c91c73-14c3-4551-978b-a0a75100d21a is in state STARTED 2025-06-02 20:00:51.715462 | orchestrator | 2025-06-02 20:00:51 | INFO  | Task 4efb0510-077d-4334-9816-4b4ce82f3dee is in state STARTED 2025-06-02 20:00:51.719292 | orchestrator | 2025-06-02 20:00:51 | INFO  | Task 4b60122b-2761-44e0-9fd9-858842834dfd is in state STARTED 2025-06-02 20:00:51.719348 | orchestrator | 2025-06-02 20:00:51 | INFO  | Task 06fb9282-2848-4bcc-ab38-69bc984f0afc is in state STARTED 2025-06-02 20:00:51.719357 | orchestrator | 2025-06-02 20:00:51 | INFO  | Wait 1 second(s) until the next check 2025-06-02 20:00:54.762062 | orchestrator | 2025-06-02 20:00:54 | INFO  | Task bfa6412a-e0af-41ef-8074-aea7a9bc6751 is in state SUCCESS 2025-06-02 20:00:54.762205 | orchestrator | 2025-06-02 20:00:54 | INFO  | Task 9a1f4e8c-0c35-4ae6-b043-2ed60dabadb7 is in state STARTED 2025-06-02 20:00:54.763866 | orchestrator | 2025-06-02 20:00:54 | INFO  | Task 85c91c73-14c3-4551-978b-a0a75100d21a is in state STARTED 2025-06-02 20:00:54.766750 | orchestrator | 2025-06-02 20:00:54 | INFO  | Task 4efb0510-077d-4334-9816-4b4ce82f3dee is in state STARTED 2025-06-02 20:00:54.767212 | orchestrator | 2025-06-02 20:00:54 | INFO  | Task 4b60122b-2761-44e0-9fd9-858842834dfd is in state STARTED 2025-06-02 20:00:54.768178 | orchestrator | 2025-06-02 20:00:54 | INFO  | Task 06fb9282-2848-4bcc-ab38-69bc984f0afc is in state STARTED 2025-06-02 20:00:54.768214 | orchestrator | 2025-06-02 20:00:54 | INFO  | Wait 1 second(s) until the next check 2025-06-02 
20:00:57.797760 | orchestrator | 2025-06-02 20:00:57 | INFO  | Task 9a1f4e8c-0c35-4ae6-b043-2ed60dabadb7 is in state STARTED 2025-06-02 20:00:57.797889 | orchestrator | 2025-06-02 20:00:57 | INFO  | Task 85c91c73-14c3-4551-978b-a0a75100d21a is in state STARTED 2025-06-02 20:00:57.797914 | orchestrator | 2025-06-02 20:00:57 | INFO  | Task 4efb0510-077d-4334-9816-4b4ce82f3dee is in state STARTED 2025-06-02 20:00:57.801279 | orchestrator | 2025-06-02 20:00:57 | INFO  | Task 4b60122b-2761-44e0-9fd9-858842834dfd is in state STARTED 2025-06-02 20:00:57.801685 | orchestrator | 2025-06-02 20:00:57 | INFO  | Task 06fb9282-2848-4bcc-ab38-69bc984f0afc is in state SUCCESS 2025-06-02 20:00:57.801756 | orchestrator | 2025-06-02 20:00:57 | INFO  | Wait 1 second(s) until the next check 2025-06-02 20:01:00.823467 | orchestrator | 2025-06-02 20:01:00 | INFO  | Task 9a1f4e8c-0c35-4ae6-b043-2ed60dabadb7 is in state STARTED 2025-06-02 20:01:00.823937 | orchestrator | 2025-06-02 20:01:00 | INFO  | Task 85c91c73-14c3-4551-978b-a0a75100d21a is in state STARTED 2025-06-02 20:01:00.825661 | orchestrator | 2025-06-02 20:01:00 | INFO  | Task 4efb0510-077d-4334-9816-4b4ce82f3dee is in state STARTED 2025-06-02 20:01:00.826490 | orchestrator | 2025-06-02 20:01:00 | INFO  | Task 4b60122b-2761-44e0-9fd9-858842834dfd is in state STARTED 2025-06-02 20:01:00.826611 | orchestrator | 2025-06-02 20:01:00 | INFO  | Wait 1 second(s) until the next check 2025-06-02 20:01:03.859038 | orchestrator | 2025-06-02 20:01:03.859140 | orchestrator | 2025-06-02 20:01:03.859156 | orchestrator | PLAY [Copy kubeconfig to the configuration repository] ************************* 2025-06-02 20:01:03.859169 | orchestrator | 2025-06-02 20:01:03.859181 | orchestrator | TASK [Get kubeconfig file] ***************************************************** 2025-06-02 20:01:03.859193 | orchestrator | Monday 02 June 2025 20:00:50 +0000 (0:00:00.146) 0:00:00.146 *********** 2025-06-02 20:01:03.859205 | orchestrator | ok: 
[testbed-manager -> testbed-node-0(192.168.16.10)] 2025-06-02 20:01:03.859217 | orchestrator | 2025-06-02 20:01:03.859228 | orchestrator | TASK [Write kubeconfig file] *************************************************** 2025-06-02 20:01:03.859265 | orchestrator | Monday 02 June 2025 20:00:51 +0000 (0:00:00.729) 0:00:00.876 *********** 2025-06-02 20:01:03.859277 | orchestrator | changed: [testbed-manager] 2025-06-02 20:01:03.859288 | orchestrator | 2025-06-02 20:01:03.859299 | orchestrator | TASK [Change server address in the kubeconfig file] **************************** 2025-06-02 20:01:03.859310 | orchestrator | Monday 02 June 2025 20:00:52 +0000 (0:00:01.092) 0:00:01.968 *********** 2025-06-02 20:01:03.859321 | orchestrator | changed: [testbed-manager] 2025-06-02 20:01:03.859331 | orchestrator | 2025-06-02 20:01:03.859342 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-02 20:01:03.859353 | orchestrator | testbed-manager : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-02 20:01:03.859366 | orchestrator | 2025-06-02 20:01:03.859377 | orchestrator | 2025-06-02 20:01:03.859387 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-02 20:01:03.859415 | orchestrator | Monday 02 June 2025 20:00:52 +0000 (0:00:00.500) 0:00:02.469 *********** 2025-06-02 20:01:03.859427 | orchestrator | =============================================================================== 2025-06-02 20:01:03.859439 | orchestrator | Write kubeconfig file --------------------------------------------------- 1.09s 2025-06-02 20:01:03.859451 | orchestrator | Get kubeconfig file ----------------------------------------------------- 0.73s 2025-06-02 20:01:03.859462 | orchestrator | Change server address in the kubeconfig file ---------------------------- 0.50s 2025-06-02 20:01:03.859474 | orchestrator | 2025-06-02 20:01:03.859485 | orchestrator | 2025-06-02 
20:01:03.859497 | orchestrator | PLAY [Prepare kubeconfig file] ************************************************* 2025-06-02 20:01:03.859508 | orchestrator | 2025-06-02 20:01:03.859520 | orchestrator | TASK [Get home directory of operator user] ************************************* 2025-06-02 20:01:03.859531 | orchestrator | Monday 02 June 2025 20:00:49 +0000 (0:00:00.144) 0:00:00.144 *********** 2025-06-02 20:01:03.859543 | orchestrator | ok: [testbed-manager] 2025-06-02 20:01:03.859558 | orchestrator | 2025-06-02 20:01:03.859573 | orchestrator | TASK [Create .kube directory] ************************************************** 2025-06-02 20:01:03.859587 | orchestrator | Monday 02 June 2025 20:00:50 +0000 (0:00:00.468) 0:00:00.612 *********** 2025-06-02 20:01:03.859604 | orchestrator | ok: [testbed-manager] 2025-06-02 20:01:03.859622 | orchestrator | 2025-06-02 20:01:03.859642 | orchestrator | TASK [Get kubeconfig file] ***************************************************** 2025-06-02 20:01:03.859660 | orchestrator | Monday 02 June 2025 20:00:50 +0000 (0:00:00.489) 0:00:01.102 *********** 2025-06-02 20:01:03.859678 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2025-06-02 20:01:03.859724 | orchestrator | 2025-06-02 20:01:03.859743 | orchestrator | TASK [Write kubeconfig file] *************************************************** 2025-06-02 20:01:03.859761 | orchestrator | Monday 02 June 2025 20:00:51 +0000 (0:00:00.722) 0:00:01.824 *********** 2025-06-02 20:01:03.859780 | orchestrator | changed: [testbed-manager] 2025-06-02 20:01:03.859801 | orchestrator | 2025-06-02 20:01:03.859823 | orchestrator | TASK [Change server address in the kubeconfig] ********************************* 2025-06-02 20:01:03.859845 | orchestrator | Monday 02 June 2025 20:00:52 +0000 (0:00:01.063) 0:00:02.888 *********** 2025-06-02 20:01:03.859859 | orchestrator | changed: [testbed-manager] 2025-06-02 20:01:03.859871 | orchestrator | 2025-06-02 20:01:03.859884 | 
orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************ 2025-06-02 20:01:03.859898 | orchestrator | Monday 02 June 2025 20:00:53 +0000 (0:00:00.726) 0:00:03.615 *********** 2025-06-02 20:01:03.859910 | orchestrator | changed: [testbed-manager -> localhost] 2025-06-02 20:01:03.859921 | orchestrator | 2025-06-02 20:01:03.859932 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ****** 2025-06-02 20:01:03.859942 | orchestrator | Monday 02 June 2025 20:00:54 +0000 (0:00:01.533) 0:00:05.148 *********** 2025-06-02 20:01:03.859953 | orchestrator | changed: [testbed-manager -> localhost] 2025-06-02 20:01:03.859964 | orchestrator | 2025-06-02 20:01:03.859975 | orchestrator | TASK [Set KUBECONFIG environment variable] ************************************* 2025-06-02 20:01:03.859985 | orchestrator | Monday 02 June 2025 20:00:55 +0000 (0:00:00.886) 0:00:06.035 *********** 2025-06-02 20:01:03.859996 | orchestrator | ok: [testbed-manager] 2025-06-02 20:01:03.860007 | orchestrator | 2025-06-02 20:01:03.860017 | orchestrator | TASK [Enable kubectl command line completion] ********************************** 2025-06-02 20:01:03.860028 | orchestrator | Monday 02 June 2025 20:00:56 +0000 (0:00:00.386) 0:00:06.422 *********** 2025-06-02 20:01:03.860039 | orchestrator | ok: [testbed-manager] 2025-06-02 20:01:03.860052 | orchestrator | 2025-06-02 20:01:03.860074 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-02 20:01:03.860102 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-02 20:01:03.860120 | orchestrator | 2025-06-02 20:01:03.860137 | orchestrator | 2025-06-02 20:01:03.860155 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-02 20:01:03.860175 | orchestrator | Monday 02 June 2025 20:00:56 +0000 (0:00:00.294) 0:00:06.717 
*********** 2025-06-02 20:01:03.860193 | orchestrator | =============================================================================== 2025-06-02 20:01:03.860211 | orchestrator | Make kubeconfig available for use inside the manager service ------------ 1.53s 2025-06-02 20:01:03.860223 | orchestrator | Write kubeconfig file --------------------------------------------------- 1.06s 2025-06-02 20:01:03.860234 | orchestrator | Change server address in the kubeconfig inside the manager service ------ 0.89s 2025-06-02 20:01:03.860299 | orchestrator | Change server address in the kubeconfig --------------------------------- 0.73s 2025-06-02 20:01:03.860312 | orchestrator | Get kubeconfig file ----------------------------------------------------- 0.72s 2025-06-02 20:01:03.860322 | orchestrator | Create .kube directory -------------------------------------------------- 0.49s 2025-06-02 20:01:03.860333 | orchestrator | Get home directory of operator user ------------------------------------- 0.47s 2025-06-02 20:01:03.860344 | orchestrator | Set KUBECONFIG environment variable ------------------------------------- 0.39s 2025-06-02 20:01:03.860354 | orchestrator | Enable kubectl command line completion ---------------------------------- 0.29s 2025-06-02 20:01:03.860365 | orchestrator | 2025-06-02 20:01:03.860376 | orchestrator | 2025-06-02 20:01:03.860387 | orchestrator | PLAY [Set kolla_action_rabbitmq] *********************************************** 2025-06-02 20:01:03.860397 | orchestrator | 2025-06-02 20:01:03.860408 | orchestrator | TASK [Inform the user about the following task] ******************************** 2025-06-02 20:01:03.860419 | orchestrator | Monday 02 June 2025 19:58:46 +0000 (0:00:00.105) 0:00:00.105 *********** 2025-06-02 20:01:03.860442 | orchestrator | ok: [localhost] => { 2025-06-02 20:01:03.860454 | orchestrator |  "msg": "The task 'Check RabbitMQ service' fails if the RabbitMQ service has not yet been deployed. This is fine." 
2025-06-02 20:01:03.860465 | orchestrator | } 2025-06-02 20:01:03.860477 | orchestrator | 2025-06-02 20:01:03.860487 | orchestrator | TASK [Check RabbitMQ service] ************************************************** 2025-06-02 20:01:03.860498 | orchestrator | Monday 02 June 2025 19:58:46 +0000 (0:00:00.040) 0:00:00.145 *********** 2025-06-02 20:01:03.860518 | orchestrator | fatal: [localhost]: FAILED! => {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string RabbitMQ Management in 192.168.16.9:15672"} 2025-06-02 20:01:03.860530 | orchestrator | ...ignoring 2025-06-02 20:01:03.860541 | orchestrator | 2025-06-02 20:01:03.860552 | orchestrator | TASK [Set kolla_action_rabbitmq = upgrade if RabbitMQ is already running] ****** 2025-06-02 20:01:03.860562 | orchestrator | Monday 02 June 2025 19:58:49 +0000 (0:00:03.109) 0:00:03.255 *********** 2025-06-02 20:01:03.860573 | orchestrator | skipping: [localhost] 2025-06-02 20:01:03.860584 | orchestrator | 2025-06-02 20:01:03.860594 | orchestrator | TASK [Set kolla_action_rabbitmq = kolla_action_ng] ***************************** 2025-06-02 20:01:03.860605 | orchestrator | Monday 02 June 2025 19:58:49 +0000 (0:00:00.063) 0:00:03.318 *********** 2025-06-02 20:01:03.860616 | orchestrator | ok: [localhost] 2025-06-02 20:01:03.860627 | orchestrator | 2025-06-02 20:01:03.860637 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-06-02 20:01:03.860648 | orchestrator | 2025-06-02 20:01:03.860659 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-06-02 20:01:03.860669 | orchestrator | Monday 02 June 2025 19:58:50 +0000 (0:00:00.265) 0:00:03.584 *********** 2025-06-02 20:01:03.860680 | orchestrator | ok: [testbed-node-0] 2025-06-02 20:01:03.860691 | orchestrator | ok: [testbed-node-1] 2025-06-02 20:01:03.860702 | orchestrator | ok: [testbed-node-2] 2025-06-02 20:01:03.860712 | orchestrator | 2025-06-02 
20:01:03.860723 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-06-02 20:01:03.860734 | orchestrator | Monday 02 June 2025 19:58:50 +0000 (0:00:00.865) 0:00:04.450 *********** 2025-06-02 20:01:03.860745 | orchestrator | ok: [testbed-node-0] => (item=enable_rabbitmq_True) 2025-06-02 20:01:03.860756 | orchestrator | ok: [testbed-node-1] => (item=enable_rabbitmq_True) 2025-06-02 20:01:03.860766 | orchestrator | ok: [testbed-node-2] => (item=enable_rabbitmq_True) 2025-06-02 20:01:03.860777 | orchestrator | 2025-06-02 20:01:03.860788 | orchestrator | PLAY [Apply role rabbitmq] ***************************************************** 2025-06-02 20:01:03.860798 | orchestrator | 2025-06-02 20:01:03.860809 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2025-06-02 20:01:03.860819 | orchestrator | Monday 02 June 2025 19:58:51 +0000 (0:00:00.727) 0:00:05.177 *********** 2025-06-02 20:01:03.860832 | orchestrator | included: /ansible/roles/rabbitmq/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 20:01:03.860843 | orchestrator | 2025-06-02 20:01:03.860853 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 2025-06-02 20:01:03.860864 | orchestrator | Monday 02 June 2025 19:58:52 +0000 (0:00:01.240) 0:00:06.417 *********** 2025-06-02 20:01:03.860875 | orchestrator | ok: [testbed-node-0] 2025-06-02 20:01:03.860885 | orchestrator | 2025-06-02 20:01:03.860896 | orchestrator | TASK [rabbitmq : Get current RabbitMQ version] ********************************* 2025-06-02 20:01:03.860907 | orchestrator | Monday 02 June 2025 19:58:54 +0000 (0:00:01.467) 0:00:07.888 *********** 2025-06-02 20:01:03.860918 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:01:03.860929 | orchestrator | 2025-06-02 20:01:03.860939 | orchestrator | TASK [rabbitmq : Get new RabbitMQ version] ************************************* 
2025-06-02 20:01:03.860950 | orchestrator | Monday 02 June 2025 19:58:54 +0000 (0:00:00.491) 0:00:08.380 *********** 2025-06-02 20:01:03.860961 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:01:03.860971 | orchestrator | 2025-06-02 20:01:03.860982 | orchestrator | TASK [rabbitmq : Check if running RabbitMQ is at most one version behind] ****** 2025-06-02 20:01:03.861000 | orchestrator | Monday 02 June 2025 19:58:55 +0000 (0:00:00.533) 0:00:08.914 *********** 2025-06-02 20:01:03.861011 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:01:03.861022 | orchestrator | 2025-06-02 20:01:03.861033 | orchestrator | TASK [rabbitmq : Catch when RabbitMQ is being downgraded] ********************** 2025-06-02 20:01:03.861044 | orchestrator | Monday 02 June 2025 19:58:55 +0000 (0:00:00.378) 0:00:09.293 *********** 2025-06-02 20:01:03.861054 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:01:03.861065 | orchestrator | 2025-06-02 20:01:03.861076 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2025-06-02 20:01:03.861087 | orchestrator | Monday 02 June 2025 19:58:56 +0000 (0:00:00.609) 0:00:09.902 *********** 2025-06-02 20:01:03.861098 | orchestrator | included: /ansible/roles/rabbitmq/tasks/remove-ha-all-policy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 20:01:03.861109 | orchestrator | 2025-06-02 20:01:03.861119 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 2025-06-02 20:01:03.861137 | orchestrator | Monday 02 June 2025 19:58:57 +0000 (0:00:01.164) 0:00:11.067 *********** 2025-06-02 20:01:03.861148 | orchestrator | ok: [testbed-node-0] 2025-06-02 20:01:03.861159 | orchestrator | 2025-06-02 20:01:03.861170 | orchestrator | TASK [rabbitmq : List RabbitMQ policies] *************************************** 2025-06-02 20:01:03.861180 | orchestrator | Monday 02 June 2025 19:58:58 +0000 (0:00:01.217) 0:00:12.285 *********** 2025-06-02 
20:01:03.861191 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:01:03.861202 | orchestrator | 2025-06-02 20:01:03.861212 | orchestrator | TASK [rabbitmq : Remove ha-all policy from RabbitMQ] *************************** 2025-06-02 20:01:03.861223 | orchestrator | Monday 02 June 2025 19:58:59 +0000 (0:00:00.326) 0:00:12.611 *********** 2025-06-02 20:01:03.861233 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:01:03.861264 | orchestrator | 2025-06-02 20:01:03.861276 | orchestrator | TASK [rabbitmq : Ensuring config directories exist] **************************** 2025-06-02 20:01:03.861286 | orchestrator | Monday 02 June 2025 19:58:59 +0000 (0:00:00.333) 0:00:12.945 *********** 2025-06-02 20:01:03.861307 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250530', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-06-02 20:01:03.861325 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250530', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-06-02 20:01:03.861346 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250530', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 
'host_group': 'rabbitmq'}}}}) 2025-06-02 20:01:03.861359 | orchestrator | 2025-06-02 20:01:03.861370 | orchestrator | TASK [rabbitmq : Copying over config.json files for services] ****************** 2025-06-02 20:01:03.861381 | orchestrator | Monday 02 June 2025 19:59:00 +0000 (0:00:00.918) 0:00:13.864 *********** 2025-06-02 20:01:03.861401 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250530', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-06-02 20:01:03.861420 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250530', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 
'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-06-02 20:01:03.861433 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250530', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-06-02 20:01:03.861452 | orchestrator | 2025-06-02 20:01:03.861463 | orchestrator | TASK [rabbitmq : Copying over rabbitmq-env.conf] ******************************* 2025-06-02 20:01:03.861474 | orchestrator | Monday 02 June 2025 19:59:02 +0000 (0:00:01.996) 0:00:15.860 *********** 2025-06-02 20:01:03.861485 | orchestrator | 
changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2)
2025-06-02 20:01:03.861496 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2)
2025-06-02 20:01:03.861507 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2)
2025-06-02 20:01:03.861518 | orchestrator |
2025-06-02 20:01:03.861529 | orchestrator | TASK [rabbitmq : Copying over rabbitmq.conf] ***********************************
2025-06-02 20:01:03.861540 | orchestrator | Monday 02 June 2025 19:59:03 +0000 (0:00:01.467) 0:00:17.328 ***********
2025-06-02 20:01:03.861551 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2)
2025-06-02 20:01:03.861561 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2)
2025-06-02 20:01:03.861572 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2)
2025-06-02 20:01:03.861583 | orchestrator |
2025-06-02 20:01:03.861599 | orchestrator | TASK [rabbitmq : Copying over erl_inetrc] **************************************
2025-06-02 20:01:03.861610 | orchestrator | Monday 02 June 2025 19:59:06 +0000 (0:00:02.461) 0:00:19.789 ***********
2025-06-02 20:01:03.861621 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2)
2025-06-02 20:01:03.861632 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2)
2025-06-02 20:01:03.861643 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2)
2025-06-02 20:01:03.861653 | orchestrator |
2025-06-02 20:01:03.861664 | orchestrator | TASK [rabbitmq : Copying over advanced.config] *********************************
2025-06-02 20:01:03.861675 | orchestrator | Monday 02 June 2025 19:59:07 +0000 (0:00:01.644) 0:00:21.434 ***********
2025-06-02 20:01:03.861686 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2)
2025-06-02 20:01:03.861697 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2)
2025-06-02 20:01:03.861708 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2)
2025-06-02 20:01:03.861718 | orchestrator |
2025-06-02 20:01:03.861729 | orchestrator | TASK [rabbitmq : Copying over definitions.json] ********************************
2025-06-02 20:01:03.861740 | orchestrator | Monday 02 June 2025 19:59:09 +0000 (0:00:01.895) 0:00:23.330 ***********
2025-06-02 20:01:03.861755 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2)
2025-06-02 20:01:03.861766 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2)
2025-06-02 20:01:03.861777 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2)
2025-06-02 20:01:03.861788 | orchestrator |
2025-06-02 20:01:03.861799 | orchestrator | TASK [rabbitmq : Copying over enabled_plugins] *********************************
2025-06-02 20:01:03.861810 | orchestrator | Monday 02 June 2025 19:59:11 +0000 (0:00:01.625) 0:00:24.955 ***********
2025-06-02 20:01:03.861827 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2)
2025-06-02 20:01:03.861838 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2)
2025-06-02 20:01:03.861849 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2)
2025-06-02 20:01:03.861860 | orchestrator |
2025-06-02 20:01:03.861870 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************
2025-06-02 20:01:03.861881 | orchestrator | Monday 02 June
2025 19:59:13 +0000 (0:00:01.558) 0:00:26.514 *********** 2025-06-02 20:01:03.861892 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:01:03.861903 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:01:03.861914 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:01:03.861925 | orchestrator | 2025-06-02 20:01:03.861935 | orchestrator | TASK [rabbitmq : Check rabbitmq containers] ************************************ 2025-06-02 20:01:03.861946 | orchestrator | Monday 02 June 2025 19:59:13 +0000 (0:00:00.378) 0:00:26.892 *********** 2025-06-02 20:01:03.861958 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250530', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-06-02 20:01:03.861979 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250530', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 
'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-06-02 20:01:03.861993 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250530', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-06-02 20:01:03.862011 | orchestrator | 2025-06-02 20:01:03.862077 | orchestrator | TASK [rabbitmq : Creating rabbitmq volume] 
*************************************
2025-06-02 20:01:03.862088 | orchestrator | Monday 02 June 2025 19:59:15 +0000 (0:00:01.710) 0:00:28.604 ***********
2025-06-02 20:01:03.862099 | orchestrator | changed: [testbed-node-0]
2025-06-02 20:01:03.862110 | orchestrator | changed: [testbed-node-1]
2025-06-02 20:01:03.862194 | orchestrator | changed: [testbed-node-2]
2025-06-02 20:01:03.862216 | orchestrator |
2025-06-02 20:01:03.862227 | orchestrator | TASK [rabbitmq : Running RabbitMQ bootstrap container] *************************
2025-06-02 20:01:03.862307 | orchestrator | Monday 02 June 2025 19:59:16 +0000 (0:00:01.343) 0:00:29.948 ***********
2025-06-02 20:01:03.862320 | orchestrator | changed: [testbed-node-0]
2025-06-02 20:01:03.862331 | orchestrator | changed: [testbed-node-2]
2025-06-02 20:01:03.862342 | orchestrator | changed: [testbed-node-1]
2025-06-02 20:01:03.862353 | orchestrator |
2025-06-02 20:01:03.862364 | orchestrator | RUNNING HANDLER [rabbitmq : Restart rabbitmq container] ************************
2025-06-02 20:01:03.862375 | orchestrator | Monday 02 June 2025 19:59:23 +0000 (0:00:06.858) 0:00:36.806 ***********
2025-06-02 20:01:03.862385 | orchestrator | changed: [testbed-node-0]
2025-06-02 20:01:03.862396 | orchestrator | changed: [testbed-node-1]
2025-06-02 20:01:03.862407 | orchestrator | changed: [testbed-node-2]
2025-06-02 20:01:03.862417 | orchestrator |
2025-06-02 20:01:03.862428 | orchestrator | PLAY [Restart rabbitmq services] ***********************************************
2025-06-02 20:01:03.862439 | orchestrator |
2025-06-02 20:01:03.862450 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] *******************************
2025-06-02 20:01:03.862461 | orchestrator | Monday 02 June 2025 19:59:23 +0000 (0:00:00.313) 0:00:37.119 ***********
2025-06-02 20:01:03.862471 | orchestrator | ok: [testbed-node-0]
2025-06-02 20:01:03.862482 | orchestrator |
2025-06-02 20:01:03.862493 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] **********************
2025-06-02 20:01:03.862504 | orchestrator | Monday 02 June 2025 19:59:24 +0000 (0:00:00.654) 0:00:37.774 ***********
2025-06-02 20:01:03.862514 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:01:03.862525 | orchestrator |
2025-06-02 20:01:03.862535 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] ***********************************
2025-06-02 20:01:03.862546 | orchestrator | Monday 02 June 2025 19:59:24 +0000 (0:00:00.342) 0:00:38.116 ***********
2025-06-02 20:01:03.862557 | orchestrator | changed: [testbed-node-0]
2025-06-02 20:01:03.862568 | orchestrator |
2025-06-02 20:01:03.862578 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ********************************
2025-06-02 20:01:03.862589 | orchestrator | Monday 02 June 2025 19:59:26 +0000 (0:00:02.146) 0:00:40.263 ***********
2025-06-02 20:01:03.862600 | orchestrator | changed: [testbed-node-0]
2025-06-02 20:01:03.862610 | orchestrator |
2025-06-02 20:01:03.862621 | orchestrator | PLAY [Restart rabbitmq services] ***********************************************
2025-06-02 20:01:03.862632 | orchestrator |
2025-06-02 20:01:03.862643 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] *******************************
2025-06-02 20:01:03.862654 | orchestrator | Monday 02 June 2025 20:00:22 +0000 (0:00:56.089) 0:01:36.352 ***********
2025-06-02 20:01:03.862664 | orchestrator | ok: [testbed-node-1]
2025-06-02 20:01:03.862675 | orchestrator |
2025-06-02 20:01:03.862686 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] **********************
2025-06-02 20:01:03.862697 | orchestrator | Monday 02 June 2025 20:00:23 +0000 (0:00:00.603) 0:01:36.956 ***********
2025-06-02 20:01:03.862708 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:01:03.862718 | orchestrator |
2025-06-02 20:01:03.862729 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] ***********************************
2025-06-02 20:01:03.862740 | orchestrator | Monday 02 June 2025 20:00:23 +0000 (0:00:00.435) 0:01:37.391 ***********
2025-06-02 20:01:03.862759 | orchestrator | changed: [testbed-node-1]
2025-06-02 20:01:03.862770 | orchestrator |
2025-06-02 20:01:03.862781 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ********************************
2025-06-02 20:01:03.862792 | orchestrator | Monday 02 June 2025 20:00:26 +0000 (0:00:02.301) 0:01:39.692 ***********
2025-06-02 20:01:03.862802 | orchestrator | changed: [testbed-node-1]
2025-06-02 20:01:03.862813 | orchestrator |
2025-06-02 20:01:03.862823 | orchestrator | PLAY [Restart rabbitmq services] ***********************************************
2025-06-02 20:01:03.862832 | orchestrator |
2025-06-02 20:01:03.862842 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] *******************************
2025-06-02 20:01:03.862861 | orchestrator | Monday 02 June 2025 20:00:40 +0000 (0:00:14.228) 0:01:53.921 ***********
2025-06-02 20:01:03.862871 | orchestrator | ok: [testbed-node-2]
2025-06-02 20:01:03.862881 | orchestrator |
2025-06-02 20:01:03.862890 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] **********************
2025-06-02 20:01:03.862900 | orchestrator | Monday 02 June 2025 20:00:40 +0000 (0:00:00.565) 0:01:54.486 ***********
2025-06-02 20:01:03.862909 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:01:03.862918 | orchestrator |
2025-06-02 20:01:03.862928 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] ***********************************
2025-06-02 20:01:03.862937 | orchestrator | Monday 02 June 2025 20:00:41 +0000 (0:00:00.346) 0:01:54.833 ***********
2025-06-02 20:01:03.862947 | orchestrator | changed: [testbed-node-2]
2025-06-02 20:01:03.862957 | orchestrator |
2025-06-02 20:01:03.862966 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ********************************
2025-06-02 20:01:03.862976 | orchestrator | Monday 02 June 2025 20:00:47 +0000 (0:00:06.576) 0:02:01.409 ***********
2025-06-02 20:01:03.862985 | orchestrator | changed: [testbed-node-2]
2025-06-02 20:01:03.862994 | orchestrator |
2025-06-02 20:01:03.863004 | orchestrator | PLAY [Apply rabbitmq post-configuration] ***************************************
2025-06-02 20:01:03.863013 | orchestrator |
2025-06-02 20:01:03.863023 | orchestrator | TASK [Include rabbitmq post-deploy.yml] ****************************************
2025-06-02 20:01:03.863032 | orchestrator | Monday 02 June 2025 20:00:58 +0000 (0:00:10.286) 0:02:11.696 ***********
2025-06-02 20:01:03.863042 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-02 20:01:03.863051 | orchestrator |
2025-06-02 20:01:03.863065 | orchestrator | TASK [rabbitmq : Enable all stable feature flags] ******************************
2025-06-02 20:01:03.863075 | orchestrator | Monday 02 June 2025 20:00:58 +0000 (0:00:00.734) 0:02:12.431 ***********
2025-06-02 20:01:03.863085 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring:
2025-06-02 20:01:03.863094 | orchestrator | enable_outward_rabbitmq_True
2025-06-02 20:01:03.863103 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring:
2025-06-02 20:01:03.863113 | orchestrator | outward_rabbitmq_restart
2025-06-02 20:01:03.863122 | orchestrator | ok: [testbed-node-1]
2025-06-02 20:01:03.863132 | orchestrator | ok: [testbed-node-2]
2025-06-02 20:01:03.863141 | orchestrator | ok: [testbed-node-0]
2025-06-02 20:01:03.863151 | orchestrator |
2025-06-02 20:01:03.863160 | orchestrator | PLAY [Apply role rabbitmq (outward)] *******************************************
2025-06-02 20:01:03.863170 | orchestrator | skipping: no hosts matched
2025-06-02 20:01:03.863179 | orchestrator |
2025-06-02 20:01:03.863189 | orchestrator | PLAY [Restart rabbitmq (outward) services] *************************************
2025-06-02 20:01:03.863198 | orchestrator | skipping: no hosts matched
2025-06-02 20:01:03.863208 | orchestrator |
2025-06-02 20:01:03.863217 | orchestrator | PLAY [Apply rabbitmq (outward) post-configuration] *****************************
2025-06-02 20:01:03.863227 | orchestrator | skipping: no hosts matched
2025-06-02 20:01:03.863253 | orchestrator |
2025-06-02 20:01:03.863263 | orchestrator | PLAY RECAP *********************************************************************
2025-06-02 20:01:03.863273 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1
2025-06-02 20:01:03.863283 | orchestrator | testbed-node-0 : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0
2025-06-02 20:01:03.863299 | orchestrator | testbed-node-1 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-06-02 20:01:03.863309 | orchestrator | testbed-node-2 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-06-02 20:01:03.863318 | orchestrator |
2025-06-02 20:01:03.863328 | orchestrator |
2025-06-02 20:01:03.863338 | orchestrator | TASKS RECAP ********************************************************************
2025-06-02 20:01:03.863347 | orchestrator | Monday 02 June 2025 20:01:01 +0000 (0:00:02.537) 0:02:14.968 ***********
2025-06-02 20:01:03.863357 | orchestrator | ===============================================================================
2025-06-02 20:01:03.863366 | orchestrator | rabbitmq : Waiting for rabbitmq to start ------------------------------- 80.60s
2025-06-02 20:01:03.863375 | orchestrator | rabbitmq : Restart rabbitmq container ---------------------------------- 11.02s
2025-06-02 20:01:03.863385 | orchestrator | rabbitmq : Running RabbitMQ bootstrap container ------------------------- 6.86s
2025-06-02 20:01:03.863394 | orchestrator | Check RabbitMQ service -------------------------------------------------- 3.11s
2025-06-02 20:01:03.863404 | orchestrator | rabbitmq : Enable all stable feature flags ------------------------------ 2.54s
2025-06-02 20:01:03.863413 | orchestrator | rabbitmq : Copying over rabbitmq.conf ----------------------------------- 2.46s
2025-06-02 20:01:03.863423 | orchestrator | rabbitmq : Copying over config.json files for services ------------------ 2.00s
2025-06-02 20:01:03.863432 | orchestrator | rabbitmq : Copying over advanced.config --------------------------------- 1.90s
2025-06-02 20:01:03.863442 | orchestrator | rabbitmq : Get info on RabbitMQ container ------------------------------- 1.82s
2025-06-02 20:01:03.863451 | orchestrator | rabbitmq : Check rabbitmq containers ------------------------------------ 1.71s
2025-06-02 20:01:03.863461 | orchestrator | rabbitmq : Copying over erl_inetrc -------------------------------------- 1.64s
2025-06-02 20:01:03.863470 | orchestrator | rabbitmq : Copying over definitions.json -------------------------------- 1.63s
2025-06-02 20:01:03.863479 | orchestrator | rabbitmq : Copying over enabled_plugins --------------------------------- 1.56s
2025-06-02 20:01:03.863491 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 1.47s
2025-06-02 20:01:03.863508 | orchestrator | rabbitmq : Copying over rabbitmq-env.conf ------------------------------- 1.47s
2025-06-02 20:01:03.863532 | orchestrator | rabbitmq : Creating rabbitmq volume ------------------------------------- 1.34s
2025-06-02 20:01:03.863547 | orchestrator | rabbitmq : include_tasks ------------------------------------------------ 1.24s
2025-06-02 20:01:03.863563 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 1.22s
2025-06-02 20:01:03.863579 | orchestrator | rabbitmq : include_tasks ------------------------------------------------ 1.16s
2025-06-02 20:01:03.863595 | orchestrator | rabbitmq : Put RabbitMQ node into maintenance mode ---------------------- 1.12s
2025-06-02 20:01:03.863610 | orchestrator | 2025-06-02 20:01:03 | INFO  | Task
9a1f4e8c-0c35-4ae6-b043-2ed60dabadb7 is in state SUCCESS
2025-06-02 20:01:03.863627 | orchestrator | 2025-06-02 20:01:03 | INFO  | Task 85c91c73-14c3-4551-978b-a0a75100d21a is in state STARTED
2025-06-02 20:01:03.863642 | orchestrator | 2025-06-02 20:01:03 | INFO  | Task 4efb0510-077d-4334-9816-4b4ce82f3dee is in state STARTED
2025-06-02 20:01:03.863655 | orchestrator | 2025-06-02 20:01:03 | INFO  | Task 4b60122b-2761-44e0-9fd9-858842834dfd is in state STARTED
2025-06-02 20:01:03.863671 | orchestrator | 2025-06-02 20:01:03 | INFO  | Wait 1 second(s) until the next check
2025-06-02 20:01:06.890137 | orchestrator | 2025-06-02 20:01:06 | INFO  | Task 85c91c73-14c3-4551-978b-a0a75100d21a is in state STARTED
2025-06-02 20:01:06.890342 | orchestrator | 2025-06-02 20:01:06 | INFO  | Task 4efb0510-077d-4334-9816-4b4ce82f3dee is in state STARTED
2025-06-02 20:01:06.890805 | orchestrator | 2025-06-02 20:01:06 | INFO  | Task 4b60122b-2761-44e0-9fd9-858842834dfd is in state STARTED
2025-06-02 20:01:06.890835 | orchestrator | 2025-06-02 20:01:06 | INFO  | Wait 1 second(s) until the next check
2025-06-02 20:01:09.937906 | orchestrator | 2025-06-02 20:01:09 | INFO  | Task 85c91c73-14c3-4551-978b-a0a75100d21a is in state STARTED
2025-06-02 20:01:09.939360 | orchestrator | 2025-06-02 20:01:09 | INFO  | Task 4efb0510-077d-4334-9816-4b4ce82f3dee is in state STARTED
2025-06-02 20:01:09.940772 | orchestrator | 2025-06-02 20:01:09 | INFO  | Task 4b60122b-2761-44e0-9fd9-858842834dfd is in state STARTED
2025-06-02 20:01:09.940898 | orchestrator | 2025-06-02 20:01:09 | INFO  | Wait 1 second(s) until the next check
2025-06-02 20:01:12.981923 | orchestrator | 2025-06-02 20:01:12 | INFO  | Task 85c91c73-14c3-4551-978b-a0a75100d21a is in state STARTED
2025-06-02 20:01:12.983426 | orchestrator | 2025-06-02 20:01:12 | INFO  | Task 4efb0510-077d-4334-9816-4b4ce82f3dee is in state STARTED
2025-06-02 20:01:12.989558 | orchestrator | 2025-06-02 20:01:12 | INFO  | Task 4b60122b-2761-44e0-9fd9-858842834dfd is in state STARTED
2025-06-02 20:01:12.989632 | orchestrator | 2025-06-02 20:01:12 | INFO  | Wait 1 second(s) until the next check
2025-06-02 20:01:16.041435 | orchestrator | 2025-06-02 20:01:16 | INFO  | Task 85c91c73-14c3-4551-978b-a0a75100d21a is in state STARTED
2025-06-02 20:01:16.045752 | orchestrator | 2025-06-02 20:01:16 | INFO  | Task 4efb0510-077d-4334-9816-4b4ce82f3dee is in state STARTED
2025-06-02 20:01:16.048723 | orchestrator | 2025-06-02 20:01:16 | INFO  | Task 4b60122b-2761-44e0-9fd9-858842834dfd is in state STARTED
2025-06-02 20:01:16.048999 | orchestrator | 2025-06-02 20:01:16 | INFO  | Wait 1 second(s) until the next check
2025-06-02 20:01:19.089185 | orchestrator | 2025-06-02 20:01:19 | INFO  | Task 85c91c73-14c3-4551-978b-a0a75100d21a is in state STARTED
2025-06-02 20:01:19.091042 | orchestrator | 2025-06-02 20:01:19 | INFO  | Task 4efb0510-077d-4334-9816-4b4ce82f3dee is in state STARTED
2025-06-02 20:01:19.093490 | orchestrator | 2025-06-02 20:01:19 | INFO  | Task 4b60122b-2761-44e0-9fd9-858842834dfd is in state STARTED
2025-06-02 20:01:19.093626 | orchestrator | 2025-06-02 20:01:19 | INFO  | Wait 1 second(s) until the next check
2025-06-02 20:01:22.118214 | orchestrator | 2025-06-02 20:01:22 | INFO  | Task 85c91c73-14c3-4551-978b-a0a75100d21a is in state STARTED
2025-06-02 20:01:22.120439 | orchestrator | 2025-06-02 20:01:22 | INFO  | Task 4efb0510-077d-4334-9816-4b4ce82f3dee is in state STARTED
2025-06-02 20:01:22.122206 | orchestrator | 2025-06-02 20:01:22 | INFO  | Task 4b60122b-2761-44e0-9fd9-858842834dfd is in state STARTED
2025-06-02 20:01:22.122289 | orchestrator | 2025-06-02 20:01:22 | INFO  | Wait 1 second(s) until the next check
2025-06-02 20:01:25.157959 | orchestrator | 2025-06-02 20:01:25 | INFO  | Task 85c91c73-14c3-4551-978b-a0a75100d21a is in state STARTED
2025-06-02 20:01:25.158183 | orchestrator | 2025-06-02 20:01:25 | INFO  | Task 4efb0510-077d-4334-9816-4b4ce82f3dee is in state STARTED
2025-06-02 20:01:25.158207 | orchestrator | 2025-06-02 20:01:25 | INFO  | Task 4b60122b-2761-44e0-9fd9-858842834dfd is in state STARTED
2025-06-02 20:01:25.158319 | orchestrator | 2025-06-02 20:01:25 | INFO  | Wait 1 second(s) until the next check
2025-06-02 20:01:28.201456 | orchestrator | 2025-06-02 20:01:28 | INFO  | Task 85c91c73-14c3-4551-978b-a0a75100d21a is in state STARTED
2025-06-02 20:01:28.203001 | orchestrator | 2025-06-02 20:01:28 | INFO  | Task 4efb0510-077d-4334-9816-4b4ce82f3dee is in state STARTED
2025-06-02 20:01:28.205481 | orchestrator | 2025-06-02 20:01:28 | INFO  | Task 4b60122b-2761-44e0-9fd9-858842834dfd is in state STARTED
2025-06-02 20:01:28.205584 | orchestrator | 2025-06-02 20:01:28 | INFO  | Wait 1 second(s) until the next check
2025-06-02 20:01:31.247905 | orchestrator | 2025-06-02 20:01:31 | INFO  | Task 85c91c73-14c3-4551-978b-a0a75100d21a is in state STARTED
2025-06-02 20:01:31.249685 | orchestrator | 2025-06-02 20:01:31 | INFO  | Task 4efb0510-077d-4334-9816-4b4ce82f3dee is in state STARTED
2025-06-02 20:01:31.250765 | orchestrator | 2025-06-02 20:01:31 | INFO  | Task 4b60122b-2761-44e0-9fd9-858842834dfd is in state STARTED
2025-06-02 20:01:31.250807 | orchestrator | 2025-06-02 20:01:31 | INFO  | Wait 1 second(s) until the next check
2025-06-02 20:01:34.305064 | orchestrator | 2025-06-02 20:01:34 | INFO  | Task 85c91c73-14c3-4551-978b-a0a75100d21a is in state STARTED
2025-06-02 20:01:34.306325 | orchestrator | 2025-06-02 20:01:34 | INFO  | Task 4efb0510-077d-4334-9816-4b4ce82f3dee is in state STARTED
2025-06-02 20:01:34.307864 | orchestrator | 2025-06-02 20:01:34 | INFO  | Task 4b60122b-2761-44e0-9fd9-858842834dfd is in state STARTED
2025-06-02 20:01:34.307906 | orchestrator | 2025-06-02 20:01:34 | INFO  | Wait 1 second(s) until the next check
2025-06-02 20:01:37.351503 | orchestrator | 2025-06-02 20:01:37 | INFO  | Task 85c91c73-14c3-4551-978b-a0a75100d21a is in state STARTED
2025-06-02 20:01:37.351608 | orchestrator | 2025-06-02 20:01:37 | INFO  | Task 4efb0510-077d-4334-9816-4b4ce82f3dee is in state STARTED
2025-06-02 20:01:37.352856 | orchestrator | 2025-06-02 20:01:37 | INFO  | Task 4b60122b-2761-44e0-9fd9-858842834dfd is in state STARTED
2025-06-02 20:01:37.354390 | orchestrator | 2025-06-02 20:01:37 | INFO  | Wait 1 second(s) until the next check
2025-06-02 20:01:40.393725 | orchestrator | 2025-06-02 20:01:40 | INFO  | Task 85c91c73-14c3-4551-978b-a0a75100d21a is in state STARTED
2025-06-02 20:01:40.395015 | orchestrator | 2025-06-02 20:01:40 | INFO  | Task 4efb0510-077d-4334-9816-4b4ce82f3dee is in state STARTED
2025-06-02 20:01:40.395958 | orchestrator | 2025-06-02 20:01:40 | INFO  | Task 4b60122b-2761-44e0-9fd9-858842834dfd is in state STARTED
2025-06-02 20:01:40.396004 | orchestrator | 2025-06-02 20:01:40 | INFO  | Wait 1 second(s) until the next check
2025-06-02 20:01:43.438875 | orchestrator | 2025-06-02 20:01:43 | INFO  | Task 85c91c73-14c3-4551-978b-a0a75100d21a is in state STARTED
2025-06-02 20:01:43.439777 | orchestrator | 2025-06-02 20:01:43 | INFO  | Task 4efb0510-077d-4334-9816-4b4ce82f3dee is in state STARTED
2025-06-02 20:01:43.440335 | orchestrator | 2025-06-02 20:01:43 | INFO  | Task 4b60122b-2761-44e0-9fd9-858842834dfd is in state STARTED
2025-06-02 20:01:43.440372 | orchestrator | 2025-06-02 20:01:43 | INFO  | Wait 1 second(s) until the next check
2025-06-02 20:01:46.490623 | orchestrator | 2025-06-02 20:01:46 | INFO  | Task 85c91c73-14c3-4551-978b-a0a75100d21a is in state STARTED
2025-06-02 20:01:46.490849 | orchestrator | 2025-06-02 20:01:46 | INFO  | Task 4efb0510-077d-4334-9816-4b4ce82f3dee is in state STARTED
2025-06-02 20:01:46.491985 | orchestrator | 2025-06-02 20:01:46 | INFO  | Task 4b60122b-2761-44e0-9fd9-858842834dfd is in state STARTED
2025-06-02 20:01:46.492110 | orchestrator | 2025-06-02 20:01:46 | INFO  | Wait 1 second(s) until the next check
2025-06-02 20:01:49.540955 | orchestrator | 2025-06-02 20:01:49 | INFO  | Task 85c91c73-14c3-4551-978b-a0a75100d21a is in state STARTED
2025-06-02 20:01:49.541044 | orchestrator | 2025-06-02 20:01:49 | INFO  | Task 4efb0510-077d-4334-9816-4b4ce82f3dee is in state STARTED
2025-06-02 20:01:49.541265 | orchestrator | 2025-06-02 20:01:49 | INFO  | Task 4b60122b-2761-44e0-9fd9-858842834dfd is in state STARTED
2025-06-02 20:01:49.541284 | orchestrator | 2025-06-02 20:01:49 | INFO  | Wait 1 second(s) until the next check
2025-06-02 20:01:52.583346 | orchestrator | 2025-06-02 20:01:52 | INFO  | Task 85c91c73-14c3-4551-978b-a0a75100d21a is in state STARTED
2025-06-02 20:01:52.583967 | orchestrator | 2025-06-02 20:01:52 | INFO  | Task 4efb0510-077d-4334-9816-4b4ce82f3dee is in state STARTED
2025-06-02 20:01:52.585739 | orchestrator | 2025-06-02 20:01:52 | INFO  | Task 4b60122b-2761-44e0-9fd9-858842834dfd is in state STARTED
2025-06-02 20:01:52.585907 | orchestrator | 2025-06-02 20:01:52 | INFO  | Wait 1 second(s) until the next check
2025-06-02 20:01:55.635342 | orchestrator |
2025-06-02 20:01:55.635450 | orchestrator |
2025-06-02 20:01:55.635516 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-06-02 20:01:55.635529 | orchestrator |
2025-06-02 20:01:55.635540 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-06-02 20:01:55.635546 | orchestrator | Monday 02 June 2025 19:59:29 +0000 (0:00:00.181) 0:00:00.181 ***********
2025-06-02 20:01:55.635553 | orchestrator | ok: [testbed-node-0]
2025-06-02 20:01:55.635559 | orchestrator | ok: [testbed-node-1]
2025-06-02 20:01:55.635566 | orchestrator | ok: [testbed-node-2]
2025-06-02 20:01:55.635609 | orchestrator | ok: [testbed-node-3]
2025-06-02 20:01:55.635616 | orchestrator | ok: [testbed-node-4]
2025-06-02 20:01:55.635622 | orchestrator | ok: [testbed-node-5]
2025-06-02 20:01:55.635628 | orchestrator |
2025-06-02 20:01:55.635635 | orchestrator | TASK [Group hosts based on enabled services]
*********************************** 2025-06-02 20:01:55.635641 | orchestrator | Monday 02 June 2025 19:59:29 +0000 (0:00:00.668) 0:00:00.850 *********** 2025-06-02 20:01:55.635647 | orchestrator | ok: [testbed-node-0] => (item=enable_ovn_True) 2025-06-02 20:01:55.635654 | orchestrator | ok: [testbed-node-1] => (item=enable_ovn_True) 2025-06-02 20:01:55.635660 | orchestrator | ok: [testbed-node-2] => (item=enable_ovn_True) 2025-06-02 20:01:55.635666 | orchestrator | ok: [testbed-node-3] => (item=enable_ovn_True) 2025-06-02 20:01:55.635672 | orchestrator | ok: [testbed-node-4] => (item=enable_ovn_True) 2025-06-02 20:01:55.635678 | orchestrator | ok: [testbed-node-5] => (item=enable_ovn_True) 2025-06-02 20:01:55.635684 | orchestrator | 2025-06-02 20:01:55.635700 | orchestrator | PLAY [Apply role ovn-controller] *********************************************** 2025-06-02 20:01:55.635707 | orchestrator | 2025-06-02 20:01:55.635713 | orchestrator | TASK [ovn-controller : include_tasks] ****************************************** 2025-06-02 20:01:55.635719 | orchestrator | Monday 02 June 2025 19:59:30 +0000 (0:00:00.932) 0:00:01.782 *********** 2025-06-02 20:01:55.635726 | orchestrator | included: /ansible/roles/ovn-controller/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-02 20:01:55.635733 | orchestrator | 2025-06-02 20:01:55.635739 | orchestrator | TASK [ovn-controller : Ensuring config directories exist] ********************** 2025-06-02 20:01:55.635745 | orchestrator | Monday 02 June 2025 19:59:32 +0000 (0:00:01.370) 0:00:03.153 *********** 2025-06-02 20:01:55.635752 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', 
'/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 20:01:55.635768 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 20:01:55.635784 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 20:01:55.635813 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 20:01:55.635825 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 20:01:55.635859 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 20:01:55.635872 | orchestrator | 2025-06-02 20:01:55.635886 | orchestrator | TASK [ovn-controller : Copying over config.json files for services] ************ 2025-06-02 20:01:55.635897 | orchestrator | Monday 02 June 2025 19:59:33 +0000 (0:00:01.763) 0:00:04.917 *********** 2025-06-02 20:01:55.635904 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 20:01:55.635916 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 20:01:55.635924 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 20:01:55.635931 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 20:01:55.635939 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 20:01:55.635952 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 20:01:55.635959 | orchestrator | 2025-06-02 20:01:55.635967 | orchestrator | TASK [ovn-controller : Ensuring systemd override directory exists] ************* 2025-06-02 20:01:55.635976 | orchestrator | Monday 02 June 2025 19:59:36 +0000 (0:00:02.404) 0:00:07.321 *********** 
2025-06-02 20:01:55.635991 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 20:01:55.636004 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 20:01:55.636022 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 20:01:55.636034 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 20:01:55.636050 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 20:01:55.636062 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 20:01:55.636073 | orchestrator |
2025-06-02 20:01:55.636084 | orchestrator | TASK [ovn-controller : Copying over systemd override] **************************
2025-06-02 20:01:55.636095 | orchestrator | Monday 02 June 2025 19:59:37 +0000 (0:00:01.222) 0:00:08.544 ***********
2025-06-02 20:01:55.636113 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 20:01:55.636124 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 20:01:55.636135 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 20:01:55.636146 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 20:01:55.636158 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 20:01:55.636176 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 20:01:55.636187 | orchestrator |
2025-06-02 20:01:55.636197 | orchestrator | TASK [ovn-controller : Check ovn-controller containers] ************************
2025-06-02 20:01:55.636207 | orchestrator | Monday 02 June 2025 19:59:39 +0000 (0:00:01.735) 0:00:10.279 ***********
2025-06-02 20:01:55.636237 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 20:01:55.636253 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 20:01:55.636264 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 20:01:55.636282 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 20:01:55.636293 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 20:01:55.636304 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 20:01:55.636314 | orchestrator |
2025-06-02 20:01:55.636325 | orchestrator | TASK [ovn-controller : Create br-int bridge on OpenvSwitch] ********************
2025-06-02 20:01:55.636335 | orchestrator | Monday 02 June 2025 19:59:40 +0000 (0:00:01.517) 0:00:11.796 ***********
2025-06-02 20:01:55.636345 | orchestrator | changed: [testbed-node-2]
2025-06-02 20:01:55.636356 | orchestrator | changed: [testbed-node-0]
2025-06-02 20:01:55.636366 | orchestrator | changed: [testbed-node-3]
2025-06-02 20:01:55.636376 | orchestrator | changed: [testbed-node-4]
2025-06-02 20:01:55.636386 | orchestrator | changed: [testbed-node-1]
2025-06-02 20:01:55.636396 | orchestrator | changed: [testbed-node-5]
2025-06-02 20:01:55.636406 | orchestrator |
2025-06-02 20:01:55.636416 | orchestrator | TASK [ovn-controller : Configure OVN in OVSDB] *********************************
2025-06-02 20:01:55.636426 | orchestrator | Monday 02 June 2025 19:59:43 +0000 (0:00:02.589) 0:00:14.386 ***********
2025-06-02 20:01:55.636436 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.10'})
2025-06-02 20:01:55.636445 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.11'})
2025-06-02 20:01:55.636455 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.12'})
2025-06-02 20:01:55.636474 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.13'})
2025-06-02 20:01:55.636485 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.14'})
2025-06-02 20:01:55.636495 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.15'})
2025-06-02 20:01:55.636505 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2025-06-02 20:01:55.636516 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2025-06-02 20:01:55.636526 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2025-06-02 20:01:55.636537 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2025-06-02 20:01:55.636548 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2025-06-02 20:01:55.636563 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2025-06-02 20:01:55.636569 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2025-06-02 20:01:55.636581 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2025-06-02 20:01:55.636587 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2025-06-02 20:01:55.636593 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2025-06-02 20:01:55.636599 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2025-06-02 20:01:55.636605 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2025-06-02 20:01:55.636611 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2025-06-02 20:01:55.636618 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2025-06-02 20:01:55.636624 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2025-06-02 20:01:55.636630 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2025-06-02 20:01:55.636636 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2025-06-02 20:01:55.636642 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2025-06-02 20:01:55.636648 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2025-06-02 20:01:55.636654 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2025-06-02 20:01:55.636660 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2025-06-02 20:01:55.636666 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2025-06-02 20:01:55.636672 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2025-06-02 20:01:55.636678 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2025-06-02 20:01:55.636684 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-monitor-all', 'value': False})
2025-06-02 20:01:55.636690 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-monitor-all', 'value': False})
2025-06-02 20:01:55.636697 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-monitor-all', 'value': False})
2025-06-02 20:01:55.636702 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-monitor-all', 'value': False})
2025-06-02 20:01:55.636708 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-monitor-all', 'value': False})
2025-06-02 20:01:55.636715 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'})
2025-06-02 20:01:55.636721 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-monitor-all', 'value': False})
2025-06-02 20:01:55.636727 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'})
2025-06-02 20:01:55.636733 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'})
2025-06-02 20:01:55.636739 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'})
2025-06-02 20:01:55.636753 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'})
2025-06-02 20:01:55.636759 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:33:12:50', 'state': 'absent'})
2025-06-02 20:01:55.636765 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'})
2025-06-02 20:01:55.636771 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:29:4a:9b', 'state': 'absent'})
2025-06-02 20:01:55.636778 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:52:c1:40', 'state': 'absent'})
2025-06-02 20:01:55.636784 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:2f:fa:44', 'state': 'present'})
2025-06-02 20:01:55.636790 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:89:18:56', 'state': 'present'})
2025-06-02 20:01:55.636796 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:71:3a:c3', 'state': 'present'})
2025-06-02 20:01:55.636805 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'})
2025-06-02 20:01:55.636811 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'})
2025-06-02 20:01:55.636817 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'})
2025-06-02 20:01:55.636823 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'})
2025-06-02 20:01:55.636829 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'})
2025-06-02 20:01:55.636835 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'})
2025-06-02 20:01:55.636841 | orchestrator |
2025-06-02 20:01:55.636847 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2025-06-02 20:01:55.636853 | orchestrator | Monday 02 June 2025 20:00:05 +0000 (0:00:21.761) 0:00:36.147 ***********
2025-06-02 20:01:55.636859 | orchestrator |
2025-06-02 20:01:55.636865 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2025-06-02 20:01:55.636871 | orchestrator | Monday 02 June 2025 20:00:05 +0000 (0:00:00.091) 0:00:36.239 ***********
2025-06-02 20:01:55.636877 | orchestrator |
2025-06-02 20:01:55.636883 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2025-06-02 20:01:55.636889 | orchestrator | Monday 02 June 2025 20:00:05 +0000 (0:00:00.108) 0:00:36.347 ***********
2025-06-02 20:01:55.636895 | orchestrator |
2025-06-02 20:01:55.636901 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2025-06-02 20:01:55.636907 | orchestrator | Monday 02 June 2025 20:00:05 +0000 (0:00:00.089) 0:00:36.437 ***********
2025-06-02 20:01:55.636914 | orchestrator |
2025-06-02 20:01:55.636920 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2025-06-02 20:01:55.636926 | orchestrator | Monday 02 June 2025 20:00:05 +0000 (0:00:00.085) 0:00:36.522 ***********
2025-06-02 20:01:55.636932 | orchestrator |
2025-06-02 20:01:55.636938 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2025-06-02 20:01:55.636944 | orchestrator | Monday 02 June 2025 20:00:05 +0000 (0:00:00.087) 0:00:36.610 ***********
2025-06-02 20:01:55.636950 | orchestrator |
2025-06-02 20:01:55.636956 | orchestrator | RUNNING HANDLER [ovn-controller : Reload systemd config] ***********************
2025-06-02 20:01:55.636961 | orchestrator | Monday 02 June 2025 20:00:05 +0000 (0:00:00.065) 0:00:36.675 ***********
2025-06-02 20:01:55.636967 | orchestrator | ok: [testbed-node-3]
2025-06-02 20:01:55.636977 | orchestrator | ok: [testbed-node-5]
2025-06-02 20:01:55.636983 | orchestrator | ok: [testbed-node-2]
2025-06-02 20:01:55.636989 | orchestrator | ok: [testbed-node-4]
2025-06-02 20:01:55.636995 | orchestrator | ok: [testbed-node-1]
2025-06-02 20:01:55.637001 | orchestrator | ok: [testbed-node-0]
2025-06-02 20:01:55.637007 | orchestrator |
2025-06-02 20:01:55.637013 | orchestrator | RUNNING HANDLER [ovn-controller : Restart ovn-controller container] ************
2025-06-02 20:01:55.637019 | orchestrator | Monday 02 June 2025 20:00:07 +0000 (0:00:01.902) 0:00:38.578 ***********
2025-06-02 20:01:55.637025 | orchestrator | changed: [testbed-node-0]
2025-06-02 20:01:55.637031 | orchestrator | changed: [testbed-node-2]
2025-06-02 20:01:55.637037 | orchestrator | changed: [testbed-node-4]
2025-06-02 20:01:55.637043 | orchestrator | changed: [testbed-node-3]
2025-06-02 20:01:55.637049 | orchestrator | changed: [testbed-node-5]
2025-06-02 20:01:55.637055 | orchestrator | changed: [testbed-node-1]
2025-06-02 20:01:55.637061 | orchestrator |
2025-06-02 20:01:55.637067 | orchestrator | PLAY [Apply role ovn-db] *******************************************************
2025-06-02 20:01:55.637073 | orchestrator |
2025-06-02 20:01:55.637079 | orchestrator | TASK [ovn-db : include_tasks] **************************************************
2025-06-02 20:01:55.637085 | orchestrator | Monday 02 June 2025 20:00:36 +0000 (0:00:28.683) 0:01:07.262 ***********
2025-06-02 20:01:55.637091 | orchestrator | included: /ansible/roles/ovn-db/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-02 20:01:55.637098 | orchestrator |
2025-06-02 20:01:55.637104 | orchestrator | TASK [ovn-db : include_tasks] **************************************************
2025-06-02 20:01:55.637110 | orchestrator | Monday 02 June 2025 20:00:37 +0000 (0:00:01.157) 0:01:08.419 ***********
2025-06-02 20:01:55.637116 | orchestrator | included: /ansible/roles/ovn-db/tasks/lookup_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-02 20:01:55.637122 | orchestrator |
2025-06-02 20:01:55.637131 | orchestrator | TASK [ovn-db : Checking for any existing OVN DB container volumes] *************
2025-06-02 20:01:55.637137 | orchestrator | Monday 02 June 2025 20:00:38 +0000 (0:00:01.006) 0:01:09.425 ***********
2025-06-02 20:01:55.637143 | orchestrator | ok: [testbed-node-1]
2025-06-02 20:01:55.637150 | orchestrator | ok: [testbed-node-0]
2025-06-02 20:01:55.637156 | orchestrator | ok: [testbed-node-2]
2025-06-02 20:01:55.637162 | orchestrator |
2025-06-02 20:01:55.637168 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB volume availability] ***************
2025-06-02 20:01:55.637174 | orchestrator | Monday 02 June 2025 20:00:39 +0000 (0:00:01.057) 0:01:10.483 ***********
2025-06-02 20:01:55.637179 | orchestrator | ok: [testbed-node-0]
2025-06-02 20:01:55.637185 | orchestrator | ok: [testbed-node-1]
2025-06-02 20:01:55.637191 | orchestrator | ok: [testbed-node-2]
2025-06-02 20:01:55.637197 | orchestrator |
2025-06-02 20:01:55.637204 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB volume availability] ***************
2025-06-02 20:01:55.637241 | orchestrator | Monday 02 June 2025 20:00:39 +0000 (0:00:00.279) 0:01:10.763 ***********
2025-06-02 20:01:55.637250 | orchestrator | ok: [testbed-node-0]
2025-06-02 20:01:55.637256 | orchestrator | ok: [testbed-node-1]
2025-06-02 20:01:55.637262 | orchestrator | ok: [testbed-node-2]
2025-06-02 20:01:55.637268 | orchestrator |
2025-06-02 20:01:55.637274 | orchestrator | TASK [ovn-db : Establish whether the OVN NB cluster has already existed] *******
2025-06-02 20:01:55.637280 | orchestrator | Monday 02 June 2025 20:00:40 +0000 (0:00:00.282) 0:01:11.045 ***********
2025-06-02 20:01:55.637286 | orchestrator | ok: [testbed-node-0]
2025-06-02 20:01:55.637292 | orchestrator | ok: [testbed-node-1]
2025-06-02 20:01:55.637303 | orchestrator | ok: [testbed-node-2]
2025-06-02 20:01:55.637309 | orchestrator |
2025-06-02 20:01:55.637315 | orchestrator | TASK [ovn-db : Establish whether the OVN SB cluster has already existed] *******
2025-06-02 20:01:55.637321 | orchestrator | Monday 02 June 2025 20:00:40 +0000 (0:00:00.694) 0:01:11.740 ***********
2025-06-02 20:01:55.637327 | orchestrator | ok: [testbed-node-0]
2025-06-02 20:01:55.637333 | orchestrator | ok: [testbed-node-1]
2025-06-02 20:01:55.637339 | orchestrator | ok: [testbed-node-2]
2025-06-02 20:01:55.637345 | orchestrator |
2025-06-02 20:01:55.637360 | orchestrator | TASK [ovn-db : Check if running on all OVN NB DB hosts] ************************
2025-06-02 20:01:55.637367 | orchestrator | Monday 02 June 2025 20:00:41 +0000 (0:00:00.334) 0:01:12.074 ***********
2025-06-02 20:01:55.637373 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:01:55.637379 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:01:55.637385 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:01:55.637391 | orchestrator |
2025-06-02 20:01:55.637397 | orchestrator | TASK [ovn-db : Check OVN NB service port liveness] *****************************
2025-06-02 20:01:55.637403 | orchestrator | Monday 02 June 2025 20:00:41 +0000 (0:00:00.380) 0:01:12.455 ***********
2025-06-02 20:01:55.637409 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:01:55.637415 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:01:55.637421 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:01:55.637427 | orchestrator |
2025-06-02 20:01:55.637433 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB service port liveness] *************
2025-06-02 20:01:55.637440 | orchestrator | Monday 02 June 2025 20:00:41 +0000 (0:00:00.371) 0:01:12.827 ***********
2025-06-02 20:01:55.637446 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:01:55.637452 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:01:55.637459 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:01:55.637471 | orchestrator |
2025-06-02 20:01:55.637482 | orchestrator | TASK [ovn-db : Get OVN NB database information] ********************************
2025-06-02 20:01:55.637493 | orchestrator | Monday 02 June 2025 20:00:42 +0000 (0:00:00.373) 0:01:13.200 ***********
2025-06-02 20:01:55.637505 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:01:55.637516 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:01:55.637528 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:01:55.637539 | orchestrator |
2025-06-02 20:01:55.637551 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB leader/follower role] **************
2025-06-02 20:01:55.637564 | orchestrator | Monday 02 June 2025 20:00:42 +0000 (0:00:00.328) 0:01:13.529 ***********
2025-06-02 20:01:55.637576 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:01:55.637588 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:01:55.637600 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:01:55.637612 | orchestrator |
2025-06-02 20:01:55.637623 | orchestrator | TASK [ovn-db : Fail on existing OVN NB cluster with no leader] *****************
2025-06-02 20:01:55.637635 | orchestrator | Monday 02 June 2025 20:00:42 +0000 (0:00:00.301) 0:01:13.830 ***********
2025-06-02 20:01:55.637646 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:01:55.637657 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:01:55.637669 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:01:55.637681 | orchestrator |
2025-06-02 20:01:55.637693 | orchestrator | TASK [ovn-db : Check if running on all OVN SB DB hosts] ************************
2025-06-02 20:01:55.637705 | orchestrator | Monday 02 June 2025
20:00:43 +0000 (0:00:00.309) 0:01:14.140 *********** 2025-06-02 20:01:55.637717 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:01:55.637729 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:01:55.637740 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:01:55.637752 | orchestrator | 2025-06-02 20:01:55.637764 | orchestrator | TASK [ovn-db : Check OVN SB service port liveness] ***************************** 2025-06-02 20:01:55.637777 | orchestrator | Monday 02 June 2025 20:00:43 +0000 (0:00:00.695) 0:01:14.835 *********** 2025-06-02 20:01:55.637789 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:01:55.637800 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:01:55.637811 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:01:55.637822 | orchestrator | 2025-06-02 20:01:55.637832 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB service port liveness] ************* 2025-06-02 20:01:55.637842 | orchestrator | Monday 02 June 2025 20:00:44 +0000 (0:00:00.534) 0:01:15.370 *********** 2025-06-02 20:01:55.637853 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:01:55.637863 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:01:55.637874 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:01:55.637885 | orchestrator | 2025-06-02 20:01:55.637896 | orchestrator | TASK [ovn-db : Get OVN SB database information] ******************************** 2025-06-02 20:01:55.637912 | orchestrator | Monday 02 June 2025 20:00:44 +0000 (0:00:00.319) 0:01:15.690 *********** 2025-06-02 20:01:55.637924 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:01:55.637934 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:01:55.637945 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:01:55.637954 | orchestrator | 2025-06-02 20:01:55.637970 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB leader/follower role] ************** 2025-06-02 20:01:55.637982 | orchestrator | Monday 02 June 2025 
20:00:45 +0000 (0:00:00.441) 0:01:16.131 *********** 2025-06-02 20:01:55.637992 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:01:55.638003 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:01:55.638052 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:01:55.638066 | orchestrator | 2025-06-02 20:01:55.638077 | orchestrator | TASK [ovn-db : Fail on existing OVN SB cluster with no leader] ***************** 2025-06-02 20:01:55.638084 | orchestrator | Monday 02 June 2025 20:00:45 +0000 (0:00:00.525) 0:01:16.657 *********** 2025-06-02 20:01:55.638090 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:01:55.638096 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:01:55.638102 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:01:55.638108 | orchestrator | 2025-06-02 20:01:55.638114 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2025-06-02 20:01:55.638121 | orchestrator | Monday 02 June 2025 20:00:45 +0000 (0:00:00.289) 0:01:16.946 *********** 2025-06-02 20:01:55.638127 | orchestrator | included: /ansible/roles/ovn-db/tasks/bootstrap-initial.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 20:01:55.638133 | orchestrator | 2025-06-02 20:01:55.638139 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new cluster)] ******************* 2025-06-02 20:01:55.638146 | orchestrator | Monday 02 June 2025 20:00:46 +0000 (0:00:00.543) 0:01:17.490 *********** 2025-06-02 20:01:55.638152 | orchestrator | ok: [testbed-node-0] 2025-06-02 20:01:55.638162 | orchestrator | ok: [testbed-node-1] 2025-06-02 20:01:55.638169 | orchestrator | ok: [testbed-node-2] 2025-06-02 20:01:55.638175 | orchestrator | 2025-06-02 20:01:55.638181 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new cluster)] ******************* 2025-06-02 20:01:55.638187 | orchestrator | Monday 02 June 2025 20:00:47 +0000 (0:00:00.696) 0:01:18.186 *********** 2025-06-02 20:01:55.638193 
| orchestrator | ok: [testbed-node-0] 2025-06-02 20:01:55.638199 | orchestrator | ok: [testbed-node-1] 2025-06-02 20:01:55.638206 | orchestrator | ok: [testbed-node-2] 2025-06-02 20:01:55.638247 | orchestrator | 2025-06-02 20:01:55.638255 | orchestrator | TASK [ovn-db : Check NB cluster status] **************************************** 2025-06-02 20:01:55.638262 | orchestrator | Monday 02 June 2025 20:00:47 +0000 (0:00:00.656) 0:01:18.842 *********** 2025-06-02 20:01:55.638268 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:01:55.638274 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:01:55.638280 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:01:55.638286 | orchestrator | 2025-06-02 20:01:55.638292 | orchestrator | TASK [ovn-db : Check SB cluster status] **************************************** 2025-06-02 20:01:55.638299 | orchestrator | Monday 02 June 2025 20:00:48 +0000 (0:00:00.466) 0:01:19.310 *********** 2025-06-02 20:01:55.638305 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:01:55.638311 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:01:55.638317 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:01:55.638323 | orchestrator | 2025-06-02 20:01:55.638329 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in NB DB] *** 2025-06-02 20:01:55.638335 | orchestrator | Monday 02 June 2025 20:00:48 +0000 (0:00:00.562) 0:01:19.873 *********** 2025-06-02 20:01:55.638342 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:01:55.638348 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:01:55.638354 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:01:55.638360 | orchestrator | 2025-06-02 20:01:55.638366 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in SB DB] *** 2025-06-02 20:01:55.638372 | orchestrator | Monday 02 June 2025 20:00:49 +0000 (0:00:00.440) 0:01:20.314 *********** 2025-06-02 20:01:55.638383 | 
orchestrator | skipping: [testbed-node-0] 2025-06-02 20:01:55.638389 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:01:55.638396 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:01:55.638402 | orchestrator | 2025-06-02 20:01:55.638408 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new member)] ******************** 2025-06-02 20:01:55.638414 | orchestrator | Monday 02 June 2025 20:00:49 +0000 (0:00:00.270) 0:01:20.584 *********** 2025-06-02 20:01:55.638420 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:01:55.638426 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:01:55.638432 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:01:55.638438 | orchestrator | 2025-06-02 20:01:55.638444 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new member)] ******************** 2025-06-02 20:01:55.638450 | orchestrator | Monday 02 June 2025 20:00:49 +0000 (0:00:00.254) 0:01:20.838 *********** 2025-06-02 20:01:55.638456 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:01:55.638462 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:01:55.638468 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:01:55.638474 | orchestrator | 2025-06-02 20:01:55.638480 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ****************************** 2025-06-02 20:01:55.638487 | orchestrator | Monday 02 June 2025 20:00:50 +0000 (0:00:00.430) 0:01:21.269 *********** 2025-06-02 20:01:55.638494 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 20:01:55.638501 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': 
{'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 20:01:55.638514 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 20:01:55 | INFO  | Task 85c91c73-14c3-4551-978b-a0a75100d21a is in state SUCCESS 2025-06-02 20:01:55.638721 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 20:01:55.638754 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 20:01:55.638767 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image':
'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 20:01:55.638857 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 20:01:55.638872 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 20:01:55.638884 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 20:01:55.638895 | orchestrator | 2025-06-02 20:01:55.638907 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ******************** 2025-06-02 20:01:55.638918 | orchestrator | Monday 02 June 2025 20:00:51 +0000 (0:00:01.524) 0:01:22.793 *********** 2025-06-02 20:01:55.638930 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 20:01:55.638943 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 20:01:55.638954 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 20:01:55.638981 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 20:01:55.638994 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 20:01:55.639012 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 20:01:55.639071 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 20:01:55.639083 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 20:01:55.639095 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 20:01:55.639106 | orchestrator | 2025-06-02 20:01:55.639118 | 
orchestrator | TASK [ovn-db : Check ovn containers] ******************************************* 2025-06-02 20:01:55.639130 | orchestrator | Monday 02 June 2025 20:00:56 +0000 (0:00:04.811) 0:01:27.604 *********** 2025-06-02 20:01:55.639150 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 20:01:55.639169 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 20:01:55.639188 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 20:01:55.639268 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 20:01:55.639293 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 20:01:55.639315 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 20:01:55.639349 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 20:01:55.639371 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 20:01:55.639394 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250530', 
'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 20:01:55.639417 | orchestrator | 2025-06-02 20:01:55.639439 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-06-02 20:01:55.639460 | orchestrator | Monday 02 June 2025 20:00:58 +0000 (0:00:02.130) 0:01:29.735 *********** 2025-06-02 20:01:55.639476 | orchestrator | 2025-06-02 20:01:55.639490 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-06-02 20:01:55.639508 | orchestrator | Monday 02 June 2025 20:00:58 +0000 (0:00:00.131) 0:01:29.866 *********** 2025-06-02 20:01:55.639539 | orchestrator | 2025-06-02 20:01:55.639559 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-06-02 20:01:55.639577 | orchestrator | Monday 02 June 2025 20:00:58 +0000 (0:00:00.137) 0:01:30.004 *********** 2025-06-02 20:01:55.639595 | orchestrator | 2025-06-02 20:01:55.639614 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] ************************* 2025-06-02 20:01:55.639633 | orchestrator | Monday 02 June 2025 20:00:59 +0000 (0:00:00.075) 0:01:30.079 *********** 2025-06-02 20:01:55.639652 | orchestrator | changed: [testbed-node-0] 2025-06-02 20:01:55.639672 | orchestrator | changed: [testbed-node-2] 2025-06-02 20:01:55.639692 | orchestrator | changed: [testbed-node-1] 2025-06-02 20:01:55.639711 | orchestrator | 2025-06-02 20:01:55.639724 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] ************************* 2025-06-02 20:01:55.639736 | orchestrator | Monday 02 June 2025 20:01:06 +0000 (0:00:07.505) 0:01:37.585 *********** 2025-06-02 20:01:55.639747 | orchestrator | changed: [testbed-node-0] 2025-06-02 20:01:55.639757 | orchestrator | changed: [testbed-node-1] 2025-06-02 
20:01:55.639768 | orchestrator | changed: [testbed-node-2] 2025-06-02 20:01:55.639779 | orchestrator | 2025-06-02 20:01:55.639790 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************ 2025-06-02 20:01:55.639801 | orchestrator | Monday 02 June 2025 20:01:14 +0000 (0:00:07.819) 0:01:45.405 *********** 2025-06-02 20:01:55.639812 | orchestrator | changed: [testbed-node-0] 2025-06-02 20:01:55.639822 | orchestrator | changed: [testbed-node-1] 2025-06-02 20:01:55.639833 | orchestrator | changed: [testbed-node-2] 2025-06-02 20:01:55.639844 | orchestrator | 2025-06-02 20:01:55.639854 | orchestrator | TASK [ovn-db : Wait for leader election] *************************************** 2025-06-02 20:01:55.639865 | orchestrator | Monday 02 June 2025 20:01:16 +0000 (0:00:02.551) 0:01:47.957 *********** 2025-06-02 20:01:55.639876 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:01:55.639886 | orchestrator | 2025-06-02 20:01:55.639897 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ****************************** 2025-06-02 20:01:55.639918 | orchestrator | Monday 02 June 2025 20:01:17 +0000 (0:00:00.114) 0:01:48.071 *********** 2025-06-02 20:01:55.639929 | orchestrator | ok: [testbed-node-1] 2025-06-02 20:01:55.639940 | orchestrator | ok: [testbed-node-0] 2025-06-02 20:01:55.639951 | orchestrator | ok: [testbed-node-2] 2025-06-02 20:01:55.639961 | orchestrator | 2025-06-02 20:01:55.639984 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] *************************** 2025-06-02 20:01:55.640026 | orchestrator | Monday 02 June 2025 20:01:17 +0000 (0:00:00.741) 0:01:48.812 *********** 2025-06-02 20:01:55.640039 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:01:55.640049 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:01:55.640060 | orchestrator | changed: [testbed-node-0] 2025-06-02 20:01:55.640071 | orchestrator | 2025-06-02 20:01:55.640082 | orchestrator | TASK [ovn-db : Get 
OVN_Southbound cluster leader] ****************************** 2025-06-02 20:01:55.640093 | orchestrator | Monday 02 June 2025 20:01:18 +0000 (0:00:00.724) 0:01:49.537 *********** 2025-06-02 20:01:55.640104 | orchestrator | ok: [testbed-node-0] 2025-06-02 20:01:55.640115 | orchestrator | ok: [testbed-node-1] 2025-06-02 20:01:55.640126 | orchestrator | ok: [testbed-node-2] 2025-06-02 20:01:55.640137 | orchestrator | 2025-06-02 20:01:55.640148 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] *************************** 2025-06-02 20:01:55.640159 | orchestrator | Monday 02 June 2025 20:01:19 +0000 (0:00:00.795) 0:01:50.333 *********** 2025-06-02 20:01:55.640169 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:01:55.640180 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:01:55.640191 | orchestrator | changed: [testbed-node-0] 2025-06-02 20:01:55.640233 | orchestrator | 2025-06-02 20:01:55.640250 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] ********************************************* 2025-06-02 20:01:55.640262 | orchestrator | Monday 02 June 2025 20:01:19 +0000 (0:00:00.606) 0:01:50.940 *********** 2025-06-02 20:01:55.640278 | orchestrator | ok: [testbed-node-1] 2025-06-02 20:01:55.640289 | orchestrator | ok: [testbed-node-0] 2025-06-02 20:01:55.640300 | orchestrator | ok: [testbed-node-2] 2025-06-02 20:01:55.640311 | orchestrator | 2025-06-02 20:01:55.640322 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] ********************************************* 2025-06-02 20:01:55.640333 | orchestrator | Monday 02 June 2025 20:01:20 +0000 (0:00:00.697) 0:01:51.637 *********** 2025-06-02 20:01:55.640344 | orchestrator | ok: [testbed-node-0] 2025-06-02 20:01:55.640355 | orchestrator | ok: [testbed-node-1] 2025-06-02 20:01:55.640365 | orchestrator | ok: [testbed-node-2] 2025-06-02 20:01:55.640376 | orchestrator | 2025-06-02 20:01:55.640387 | orchestrator | TASK [ovn-db : Unset bootstrap args fact] 
**************************************
2025-06-02 20:01:55.640398 | orchestrator | Monday 02 June 2025 20:01:21 +0000 (0:00:00.986) 0:01:52.624 ***********
2025-06-02 20:01:55.640408 | orchestrator | ok: [testbed-node-0]
2025-06-02 20:01:55.640419 | orchestrator | ok: [testbed-node-1]
2025-06-02 20:01:55.640429 | orchestrator | ok: [testbed-node-2]
2025-06-02 20:01:55.640440 | orchestrator |
2025-06-02 20:01:55.640451 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ******************************
2025-06-02 20:01:55.640462 | orchestrator | Monday 02 June 2025 20:01:21 +0000 (0:00:00.266) 0:01:52.891 ***********
2025-06-02 20:01:55.640473 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 20:01:55.640485 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 20:01:55.640497 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 20:01:55.640518 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 20:01:55.640529 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 20:01:55.640541 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 20:01:55.640561 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 20:01:55.640573 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 20:01:55.640588 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 20:01:55.640600 | orchestrator |
2025-06-02 20:01:55.640610 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ********************
2025-06-02 20:01:55.640622 | orchestrator | Monday 02 June 2025 20:01:23 +0000 (0:00:01.363) 0:01:54.254 ***********
2025-06-02 20:01:55.640633 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 20:01:55.640645 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 20:01:55.640662 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 20:01:55.640673 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 20:01:55.640685 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 20:01:55.640696 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 20:01:55.640714 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 20:01:55.640726 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 20:01:55.640743 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 20:01:55.640755 | orchestrator |
2025-06-02 20:01:55.640766 | orchestrator | TASK [ovn-db : Check ovn containers] *******************************************
2025-06-02 20:01:55.640776 | orchestrator | Monday 02 June 2025 20:01:27 +0000 (0:00:04.230) 0:01:58.485 ***********
2025-06-02 20:01:55.640787 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 20:01:55.640798 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 20:01:55.640816 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 20:01:55.640828 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 20:01:55.640839 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 20:01:55.640850 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 20:01:55.640861 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 20:01:55.640880 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 20:01:55.640892 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 20:01:55.640903 | orchestrator |
2025-06-02 20:01:55.640913 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2025-06-02 20:01:55.640924 | orchestrator | Monday 02 June 2025 20:01:30 +0000 (0:00:03.084) 0:02:01.570 ***********
2025-06-02 20:01:55.640935 | orchestrator |
2025-06-02 20:01:55.640951 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2025-06-02 20:01:55.640961 | orchestrator | Monday 02 June 2025 20:01:30 +0000 (0:00:00.064) 0:02:01.634 ***********
2025-06-02 20:01:55.640972 | orchestrator |
2025-06-02 20:01:55.640983 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2025-06-02 20:01:55.640994 | orchestrator | Monday 02 June 2025 20:01:30 +0000 (0:00:00.065) 0:02:01.699 ***********
2025-06-02 20:01:55.641005 | orchestrator |
2025-06-02 20:01:55.641015 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] *************************
2025-06-02 20:01:55.641038 | orchestrator | Monday 02 June 2025 20:01:30 +0000 (0:00:00.063) 0:02:01.763 ***********
2025-06-02 20:01:55.641051 | orchestrator | changed: [testbed-node-1]
2025-06-02 20:01:55.641070 | orchestrator | changed: [testbed-node-2]
2025-06-02 20:01:55.641087 | orchestrator |
2025-06-02 20:01:55.641105 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] *************************
2025-06-02 20:01:55.641125 | orchestrator | Monday 02 June 2025 20:01:36 +0000 (0:00:06.219) 0:02:07.982 ***********
2025-06-02 20:01:55.641146 | orchestrator | changed: [testbed-node-1]
2025-06-02 20:01:55.641164 | orchestrator | changed: [testbed-node-2]
2025-06-02 20:01:55.641179 | orchestrator |
2025-06-02 20:01:55.641190 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************
2025-06-02 20:01:55.641200 | orchestrator | Monday 02 June 2025 20:01:43 +0000 (0:00:06.165) 0:02:14.148 ***********
2025-06-02 20:01:55.641260 | orchestrator | changed: [testbed-node-2]
2025-06-02 20:01:55.641276 | orchestrator | changed: [testbed-node-1]
2025-06-02 20:01:55.641287 | orchestrator |
2025-06-02 20:01:55.641298 | orchestrator | TASK [ovn-db : Wait for leader election] ***************************************
2025-06-02 20:01:55.641309 | orchestrator | Monday 02 June 2025 20:01:49 +0000 (0:00:06.193) 0:02:20.342 ***********
2025-06-02 20:01:55.641320 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:01:55.641330 | orchestrator |
2025-06-02 20:01:55.641341 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ******************************
2025-06-02 20:01:55.641352 | orchestrator | Monday 02 June 2025 20:01:49 +0000 (0:00:00.103) 0:02:20.446 ***********
2025-06-02 20:01:55.641363 | orchestrator | ok: [testbed-node-0]
2025-06-02 20:01:55.641374 | orchestrator | ok: [testbed-node-1]
2025-06-02 20:01:55.641384 | orchestrator | ok: [testbed-node-2]
2025-06-02 20:01:55.641395 | orchestrator |
2025-06-02 20:01:55.641407 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] ***************************
2025-06-02 20:01:55.641418 | orchestrator | Monday 02 June 2025 20:01:50 +0000 (0:00:00.863) 0:02:21.309 ***********
2025-06-02 20:01:55.641428 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:01:55.641440 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:01:55.641451 | orchestrator | changed: [testbed-node-0]
2025-06-02 20:01:55.641462 | orchestrator |
2025-06-02 20:01:55.641473 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ******************************
2025-06-02 20:01:55.641484 | orchestrator | Monday 02 June 2025 20:01:50 +0000 (0:00:00.616) 0:02:21.925 ***********
2025-06-02 20:01:55.641495 | orchestrator | ok: [testbed-node-0]
2025-06-02 20:01:55.641506 | orchestrator | ok: [testbed-node-1]
2025-06-02 20:01:55.641517 | orchestrator | ok: [testbed-node-2]
2025-06-02 20:01:55.641528 | orchestrator |
2025-06-02 20:01:55.641539 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] ***************************
2025-06-02 20:01:55.641550 | orchestrator | Monday 02 June 2025 20:01:51 +0000 (0:00:00.737) 0:02:22.662 ***********
2025-06-02 20:01:55.641561 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:01:55.641571 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:01:55.641582 | orchestrator | changed: [testbed-node-0]
2025-06-02 20:01:55.641593 | orchestrator |
2025-06-02 20:01:55.641603 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] *********************************************
2025-06-02 20:01:55.641614 | orchestrator | Monday 02 June 2025 20:01:52 +0000 (0:00:00.527) 0:02:23.190 ***********
2025-06-02 20:01:55.641625 | orchestrator | ok: [testbed-node-0]
2025-06-02 20:01:55.641636 | orchestrator | ok: [testbed-node-1]
2025-06-02 20:01:55.641647 | orchestrator | ok: [testbed-node-2]
2025-06-02 20:01:55.641657 | orchestrator |
2025-06-02 20:01:55.641668 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] *********************************************
2025-06-02 20:01:55.641679 | orchestrator | Monday 02 June 2025 20:01:53 +0000 (0:00:00.870) 0:02:24.061 ***********
2025-06-02 20:01:55.641689 | orchestrator | ok: [testbed-node-0]
2025-06-02 20:01:55.641700 | orchestrator | ok: [testbed-node-1]
2025-06-02 20:01:55.641711 | orchestrator | ok: [testbed-node-2]
2025-06-02 20:01:55.641722 | orchestrator |
2025-06-02 20:01:55.641733 | orchestrator | PLAY RECAP *********************************************************************
2025-06-02 20:01:55.641754 | orchestrator | testbed-node-0 : ok=44  changed=18  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0
2025-06-02 20:01:55.641767 | orchestrator | testbed-node-1 : ok=43  changed=19  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0
2025-06-02 20:01:55.641787 | orchestrator | testbed-node-2 : ok=43  changed=19  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0
2025-06-02 20:01:55.641799 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-02 20:01:55.641810 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-02 20:01:55.641822 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-02 20:01:55.641832 | orchestrator |
2025-06-02 20:01:55.641843 | orchestrator |
2025-06-02 20:01:55.641854 | orchestrator | TASKS RECAP ********************************************************************
2025-06-02 20:01:55.641866 | orchestrator | Monday 02 June 2025 20:01:53 +0000 (0:00:00.733) 0:02:24.794 ***********
2025-06-02 20:01:55.641876 | orchestrator | ===============================================================================
2025-06-02 20:01:55.641893 | orchestrator | ovn-controller : Restart ovn-controller container ---------------------- 28.68s
2025-06-02 20:01:55.641904 | orchestrator | ovn-controller : Configure OVN in OVSDB -------------------------------- 21.76s
2025-06-02 20:01:55.641915 | orchestrator | ovn-db : Restart ovn-sb-db container ----------------------------------- 13.99s
2025-06-02 20:01:55.641926 | orchestrator | ovn-db : Restart ovn-nb-db container ----------------------------------- 13.73s
2025-06-02 20:01:55.641937 | orchestrator | ovn-db : Restart ovn-northd container ----------------------------------- 8.75s
2025-06-02 20:01:55.641948 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 4.81s
2025-06-02 20:01:55.641959 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 4.23s
2025-06-02 20:01:55.641969 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 3.08s
2025-06-02 20:01:55.641981 | orchestrator | ovn-controller : Create br-int bridge on OpenvSwitch -------------------- 2.59s
2025-06-02 20:01:55.641991 | orchestrator | ovn-controller : Copying over config.json files for services ------------ 2.40s
2025-06-02 20:01:55.642002 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 2.13s
2025-06-02 20:01:55.642013 | orchestrator | ovn-controller : Reload systemd config ---------------------------------- 1.90s
2025-06-02 20:01:55.642123 | orchestrator | ovn-controller : Ensuring config directories exist ---------------------- 1.76s
2025-06-02 20:01:55.642136 | orchestrator | ovn-controller : Copying over systemd override -------------------------- 1.74s
2025-06-02 20:01:55.642148 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.52s
2025-06-02 20:01:55.642159 | orchestrator | ovn-controller : Check ovn-controller containers ------------------------ 1.52s
2025-06-02 20:01:55.642170 | orchestrator | ovn-controller : include_tasks ------------------------------------------ 1.37s
2025-06-02 20:01:55.642181 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.36s
2025-06-02 20:01:55.642195 | orchestrator | ovn-controller : Ensuring systemd override directory exists ------------- 1.22s
2025-06-02 20:01:55.642232 | orchestrator | ovn-db : include_tasks -------------------------------------------------- 1.16s
2025-06-02 20:01:55.642255 | orchestrator | 2025-06-02 20:01:55 | INFO  | Task 4efb0510-077d-4334-9816-4b4ce82f3dee is in state STARTED
2025-06-02 20:01:55.642274 | orchestrator | 2025-06-02 20:01:55 | INFO  | Task 4b60122b-2761-44e0-9fd9-858842834dfd is in state STARTED
2025-06-02 20:01:55.642293 | orchestrator | 2025-06-02 20:01:55 | INFO  | Wait 1 second(s) until the next check
2025-06-02 20:01:58.681503 | orchestrator | 2025-06-02 20:01:58 | INFO  | Task 4efb0510-077d-4334-9816-4b4ce82f3dee is in state STARTED
2025-06-02 20:01:58.683558 | orchestrator | 2025-06-02 20:01:58 | INFO  | Task 4b60122b-2761-44e0-9fd9-858842834dfd is in state STARTED
2025-06-02 20:01:58.683985 | orchestrator | 2025-06-02 20:01:58 | INFO  | Wait 1 second(s) until the next check
2025-06-02 20:02:01.746399 | orchestrator | 2025-06-02 20:02:01 | INFO  | Task 4efb0510-077d-4334-9816-4b4ce82f3dee is in state STARTED
2025-06-02 20:02:01.748449 | orchestrator | 2025-06-02 20:02:01 | INFO  | Task 4b60122b-2761-44e0-9fd9-858842834dfd is in state STARTED
2025-06-02 20:02:01.748621 | orchestrator | 2025-06-02 20:02:01 | INFO  | Wait 1 second(s) until the next check
2025-06-02 20:02:04.789101 | orchestrator | 2025-06-02 20:02:04 | INFO  | Task 4efb0510-077d-4334-9816-4b4ce82f3dee is in state STARTED
2025-06-02 20:02:04.789701 | orchestrator | 2025-06-02 20:02:04 | INFO  | Task 4b60122b-2761-44e0-9fd9-858842834dfd is in state STARTED
2025-06-02 20:02:04.789753 | orchestrator | 2025-06-02 20:02:04 | INFO  | Wait 1 second(s) until the next check
2025-06-02 20:02:07.850290 | orchestrator | 2025-06-02 20:02:07 | INFO  | Task 4efb0510-077d-4334-9816-4b4ce82f3dee is in state STARTED
2025-06-02 20:02:07.850630 | orchestrator | 2025-06-02 20:02:07 | INFO  | Task 4b60122b-2761-44e0-9fd9-858842834dfd is in state STARTED
2025-06-02 20:02:07.850806 | orchestrator | 2025-06-02 20:02:07 | INFO  | Wait 1 second(s) until the next check
2025-06-02 20:02:10.910773 | orchestrator | 2025-06-02 20:02:10 | INFO  | Task 4efb0510-077d-4334-9816-4b4ce82f3dee is in state STARTED
2025-06-02 20:02:10.911432 | orchestrator | 2025-06-02 20:02:10 | INFO  | Task 4b60122b-2761-44e0-9fd9-858842834dfd is in state STARTED
2025-06-02 20:02:10.911470 | orchestrator | 2025-06-02 20:02:10 | INFO  | Wait 1 second(s) until the next check
2025-06-02 20:02:13.953530 | orchestrator | 2025-06-02 20:02:13 | INFO  | Task 4efb0510-077d-4334-9816-4b4ce82f3dee is in state STARTED
2025-06-02 20:02:13.953898 | orchestrator | 2025-06-02 20:02:13 | INFO  | Task 4b60122b-2761-44e0-9fd9-858842834dfd is in state STARTED
2025-06-02 20:02:13.953939 | orchestrator | 2025-06-02 20:02:13 | INFO  | Wait 1 second(s) until the next check
2025-06-02 20:02:17.006744 | orchestrator | 2025-06-02 20:02:17 | INFO  | Task 4efb0510-077d-4334-9816-4b4ce82f3dee is in state STARTED
2025-06-02 20:02:17.007807 | orchestrator | 2025-06-02 20:02:17 | INFO  | Task 4b60122b-2761-44e0-9fd9-858842834dfd is in state STARTED
2025-06-02 20:02:17.007889 | orchestrator | 2025-06-02 20:02:17 | INFO  | Wait 1 second(s) until the next check
2025-06-02 20:02:20.052454 | orchestrator | 2025-06-02 20:02:20 | INFO  | Task 4efb0510-077d-4334-9816-4b4ce82f3dee is in state STARTED
2025-06-02 20:02:20.053973 | orchestrator | 2025-06-02 20:02:20 | INFO  | Task 4b60122b-2761-44e0-9fd9-858842834dfd is in state STARTED
2025-06-02 20:02:20.054090 | orchestrator | 2025-06-02 20:02:20 | INFO  | Wait 1 second(s) until the next check
2025-06-02 20:02:23.097520 | orchestrator | 2025-06-02 20:02:23 | INFO  | Task 4efb0510-077d-4334-9816-4b4ce82f3dee is in state STARTED
2025-06-02 20:02:23.098369 | orchestrator | 2025-06-02 20:02:23 | INFO  | Task 4b60122b-2761-44e0-9fd9-858842834dfd is in state STARTED
2025-06-02 20:02:23.098399 | orchestrator | 2025-06-02 20:02:23 | INFO  | Wait 1 second(s) until the next check
2025-06-02 20:02:26.152597 | orchestrator | 2025-06-02 20:02:26 | INFO  | Task 4efb0510-077d-4334-9816-4b4ce82f3dee is in state STARTED
2025-06-02 20:02:26.153125 | orchestrator | 2025-06-02 20:02:26 | INFO  | Task 4b60122b-2761-44e0-9fd9-858842834dfd is in state STARTED
2025-06-02 20:02:26.153169 | orchestrator | 2025-06-02 20:02:26 | INFO  | Wait 1 second(s) until the next check
2025-06-02 20:02:29.202936 | orchestrator | 2025-06-02 20:02:29 | INFO  | Task 4efb0510-077d-4334-9816-4b4ce82f3dee is in state STARTED
2025-06-02 20:02:29.202990 | orchestrator | 2025-06-02 20:02:29 | INFO  | Task 4b60122b-2761-44e0-9fd9-858842834dfd is in state STARTED
2025-06-02 20:02:29.202996 | orchestrator | 2025-06-02 20:02:29 | INFO  | Wait 1 second(s) until the next check
2025-06-02 20:02:32.245473 | orchestrator | 2025-06-02 20:02:32 | INFO  | Task 4efb0510-077d-4334-9816-4b4ce82f3dee is in state STARTED
2025-06-02 20:02:32.246272 | orchestrator | 2025-06-02 20:02:32 | INFO  | Task 4b60122b-2761-44e0-9fd9-858842834dfd is in state STARTED
2025-06-02 20:02:32.246304 | orchestrator | 2025-06-02 20:02:32 | INFO  | Wait 1 second(s) until the next check
2025-06-02 20:02:35.278902 | orchestrator | 2025-06-02 20:02:35 | INFO  | Task 4efb0510-077d-4334-9816-4b4ce82f3dee is in state STARTED
2025-06-02 20:02:35.279264 | orchestrator | 2025-06-02 20:02:35 | INFO  | Task 4b60122b-2761-44e0-9fd9-858842834dfd is in state STARTED
2025-06-02 20:02:35.279664 | orchestrator | 2025-06-02 20:02:35 | INFO  | Wait 1 second(s) until the next check
2025-06-02 20:02:38.317823 | orchestrator | 2025-06-02 20:02:38 | INFO  | Task 4efb0510-077d-4334-9816-4b4ce82f3dee is in state STARTED
2025-06-02 20:02:38.318327 | orchestrator | 2025-06-02 20:02:38 | INFO  | Task 4b60122b-2761-44e0-9fd9-858842834dfd is in state STARTED
2025-06-02 20:02:38.318379 | orchestrator | 2025-06-02 20:02:38 | INFO  | Wait 1 second(s) until the next check
2025-06-02 20:02:41.358905 | orchestrator | 2025-06-02 20:02:41 | INFO  | Task 4efb0510-077d-4334-9816-4b4ce82f3dee is in state STARTED
2025-06-02 20:02:41.359838 | orchestrator | 2025-06-02 20:02:41 | INFO  | Task 4b60122b-2761-44e0-9fd9-858842834dfd is in state STARTED
2025-06-02 20:02:41.360440 | orchestrator | 2025-06-02 20:02:41 | INFO  | Wait 1 second(s) until the next check
2025-06-02 20:02:44.408500 | orchestrator | 2025-06-02 20:02:44 | INFO  | Task 4efb0510-077d-4334-9816-4b4ce82f3dee is in state STARTED
2025-06-02 20:02:44.408646 | orchestrator | 2025-06-02 20:02:44 | INFO  | Task 4b60122b-2761-44e0-9fd9-858842834dfd is in state STARTED
2025-06-02 20:02:44.408663 | orchestrator | 2025-06-02 20:02:44 | INFO  | Wait 1 second(s) until the next check
2025-06-02 20:02:47.454843 | orchestrator | 2025-06-02 20:02:47 | INFO  | Task 4efb0510-077d-4334-9816-4b4ce82f3dee is in state STARTED
2025-06-02 20:02:47.455007 | orchestrator | 2025-06-02 20:02:47 | INFO  | Task 4b60122b-2761-44e0-9fd9-858842834dfd is in state STARTED
2025-06-02 20:02:47.455550 | orchestrator | 2025-06-02 20:02:47 | INFO  | Wait 1 second(s) until the next check
2025-06-02 20:02:50.489475 | orchestrator | 2025-06-02 20:02:50 | INFO  | Task 4efb0510-077d-4334-9816-4b4ce82f3dee is in state STARTED
2025-06-02 20:02:50.489639 | orchestrator | 2025-06-02 20:02:50 | INFO  | Task 4b60122b-2761-44e0-9fd9-858842834dfd is in state STARTED
2025-06-02 20:02:50.489961 | orchestrator | 2025-06-02 20:02:50 | INFO  | Wait 1 second(s) until the next check
2025-06-02 20:02:53.542532 | orchestrator | 2025-06-02 20:02:53 | INFO  | Task 60f9e0f2-1b5b-4008-93d7-a94bcca79c02 is in state STARTED
2025-06-02 20:02:53.547650 | orchestrator | 2025-06-02 20:02:53 | INFO  | Task 4efb0510-077d-4334-9816-4b4ce82f3dee is in state STARTED
2025-06-02 20:02:53.547752 | orchestrator | 2025-06-02 20:02:53 | INFO  | Task 4b60122b-2761-44e0-9fd9-858842834dfd is in state STARTED
2025-06-02 20:02:53.547761 | orchestrator | 2025-06-02 20:02:53 | INFO  | Wait 1 second(s) until the next check
2025-06-02 20:02:56.594678 | orchestrator | 2025-06-02 20:02:56 | INFO  | Task 60f9e0f2-1b5b-4008-93d7-a94bcca79c02 is in state STARTED
2025-06-02 20:02:56.594774 | orchestrator | 2025-06-02 20:02:56 | INFO  | Task 4efb0510-077d-4334-9816-4b4ce82f3dee is in state STARTED
2025-06-02 20:02:56.595445 | orchestrator | 2025-06-02 20:02:56 | INFO  | Task 4b60122b-2761-44e0-9fd9-858842834dfd is in state STARTED
2025-06-02 20:02:56.595471 | orchestrator | 2025-06-02 20:02:56 | INFO  | Wait 1 second(s) until the next check
2025-06-02 20:02:59.630358 | orchestrator | 2025-06-02 20:02:59 | INFO  | Task 60f9e0f2-1b5b-4008-93d7-a94bcca79c02 is in state STARTED
2025-06-02 20:02:59.633116 | orchestrator | 2025-06-02 20:02:59 | INFO  | Task 4efb0510-077d-4334-9816-4b4ce82f3dee is in state STARTED
2025-06-02 20:02:59.635068 | orchestrator | 2025-06-02 20:02:59 | INFO  | Task 4b60122b-2761-44e0-9fd9-858842834dfd is in state STARTED
2025-06-02 20:02:59.635237 | orchestrator | 2025-06-02 20:02:59 | INFO  | Wait 1 second(s) until the next check
2025-06-02 20:03:02.675419 | orchestrator | 2025-06-02 20:03:02 | INFO  | Task 60f9e0f2-1b5b-4008-93d7-a94bcca79c02 is in state STARTED
2025-06-02 20:03:02.676414 | orchestrator | 2025-06-02 20:03:02 | INFO  | Task 4efb0510-077d-4334-9816-4b4ce82f3dee is in state STARTED
2025-06-02 20:03:02.678199 | orchestrator | 2025-06-02 20:03:02 | INFO  | Task 4b60122b-2761-44e0-9fd9-858842834dfd is in state STARTED
2025-06-02 20:03:02.678233 | orchestrator | 2025-06-02 20:03:02 | INFO  | Wait 1 second(s) until the next check
2025-06-02 20:03:05.720752 | orchestrator | 2025-06-02 20:03:05 | INFO  | Task 60f9e0f2-1b5b-4008-93d7-a94bcca79c02 is in state STARTED
2025-06-02 20:03:05.721166 | orchestrator | 2025-06-02 20:03:05 | INFO  | Task 4efb0510-077d-4334-9816-4b4ce82f3dee is in state STARTED
2025-06-02 20:03:05.723505 | orchestrator | 2025-06-02 20:03:05 | INFO  | Task 4b60122b-2761-44e0-9fd9-858842834dfd is in state STARTED
2025-06-02 20:03:05.723548 | orchestrator | 2025-06-02 20:03:05 | INFO  | Wait 1 second(s) until the next check
2025-06-02 20:03:08.761895 | orchestrator | 2025-06-02 20:03:08 | INFO  | Task 60f9e0f2-1b5b-4008-93d7-a94bcca79c02 is in state SUCCESS
2025-06-02 20:03:08.765718 | orchestrator | 2025-06-02 20:03:08 | INFO  | Task 4efb0510-077d-4334-9816-4b4ce82f3dee is in state STARTED
2025-06-02 20:03:08.767590 | orchestrator | 2025-06-02 20:03:08 | INFO  | Task 4b60122b-2761-44e0-9fd9-858842834dfd is in state STARTED
2025-06-02 20:03:08.767709 | orchestrator | 2025-06-02 20:03:08 | INFO  | Wait 1 second(s) until the next check
2025-06-02 20:03:11.808273 | orchestrator | 2025-06-02 20:03:11 | INFO  | Task 4efb0510-077d-4334-9816-4b4ce82f3dee is in state STARTED
2025-06-02 20:03:11.810077 | orchestrator | 2025-06-02 20:03:11 | INFO  | Task 4b60122b-2761-44e0-9fd9-858842834dfd is in state STARTED
2025-06-02 20:03:11.810428 | orchestrator | 2025-06-02 20:03:11 | INFO  | Wait 1 second(s) until the next check
2025-06-02 20:03:14.854621 | orchestrator | 2025-06-02 20:03:14 | INFO  | Task 4efb0510-077d-4334-9816-4b4ce82f3dee is in state STARTED
2025-06-02 20:03:14.856923 | orchestrator | 2025-06-02 20:03:14 | INFO  | Task 4b60122b-2761-44e0-9fd9-858842834dfd is in state STARTED
2025-06-02 20:03:14.857008 | orchestrator | 2025-06-02 20:03:14 | INFO  | Wait 1 second(s) until the next check
2025-06-02 20:03:17.905298 | orchestrator | 2025-06-02 20:03:17 | INFO  | Task 4efb0510-077d-4334-9816-4b4ce82f3dee is in state STARTED
2025-06-02 20:03:17.907464 | orchestrator | 2025-06-02 20:03:17 | INFO  | Task 4b60122b-2761-44e0-9fd9-858842834dfd is in state STARTED
2025-06-02 20:03:17.907511 | orchestrator | 2025-06-02 20:03:17 | INFO  | Wait 1 second(s) until the next check
2025-06-02 20:03:20.955911 | orchestrator | 2025-06-02 20:03:20 | INFO  | Task 4efb0510-077d-4334-9816-4b4ce82f3dee is in state STARTED
2025-06-02 20:03:20.957453 | orchestrator | 2025-06-02 20:03:20 | INFO  | Task 4b60122b-2761-44e0-9fd9-858842834dfd is in state STARTED
2025-06-02 20:03:20.957521 | orchestrator | 2025-06-02 20:03:20 | INFO  | Wait 1 second(s) until the next check
2025-06-02 20:03:23.997596 | orchestrator | 2025-06-02 20:03:23 | INFO  | Task 4efb0510-077d-4334-9816-4b4ce82f3dee is in state STARTED
2025-06-02 20:03:23.998718 | orchestrator | 2025-06-02 20:03:23 | INFO  | Task 4b60122b-2761-44e0-9fd9-858842834dfd is in state STARTED
2025-06-02 20:03:23.998781 | orchestrator | 2025-06-02 20:03:23 | INFO  | Wait 1 second(s) until the next check
2025-06-02 20:03:27.048705 | orchestrator | 2025-06-02 20:03:27 | INFO  | Task 4efb0510-077d-4334-9816-4b4ce82f3dee is in state STARTED
2025-06-02 20:03:27.050461 | orchestrator | 2025-06-02 20:03:27 | INFO  | Task 4b60122b-2761-44e0-9fd9-858842834dfd is in state STARTED
2025-06-02 20:03:27.050518 | orchestrator | 2025-06-02 20:03:27 | INFO  | Wait 1 second(s) until the next check
2025-06-02 20:03:30.087553 | orchestrator | 2025-06-02 20:03:30 | INFO  | Task 4efb0510-077d-4334-9816-4b4ce82f3dee is in state STARTED
2025-06-02 20:03:30.089870 | orchestrator | 2025-06-02 20:03:30 | INFO  | Task 4b60122b-2761-44e0-9fd9-858842834dfd is in state STARTED
2025-06-02 20:03:30.089914 | orchestrator | 2025-06-02 20:03:30 | INFO  | Wait 1 second(s) until the next check
2025-06-02 20:03:33.142577 | orchestrator | 2025-06-02 20:03:33 | INFO  | Task 4efb0510-077d-4334-9816-4b4ce82f3dee is in state STARTED
2025-06-02 20:03:33.144442 | orchestrator | 2025-06-02 20:03:33 | INFO  | Task 4b60122b-2761-44e0-9fd9-858842834dfd is in state STARTED
2025-06-02 20:03:33.144508 | orchestrator | 2025-06-02 20:03:33 | INFO  | Wait 1 second(s) until the next check
2025-06-02 20:03:36.199267 | orchestrator | 2025-06-02 20:03:36 | INFO  | Task 4efb0510-077d-4334-9816-4b4ce82f3dee is in state STARTED
2025-06-02 20:03:36.199748 | orchestrator | 2025-06-02 20:03:36 | INFO  | Task 4b60122b-2761-44e0-9fd9-858842834dfd is in state STARTED
2025-06-02 20:03:36.199775 | orchestrator | 2025-06-02 20:03:36 | INFO  | Wait 1 second(s) until the next check
2025-06-02 20:03:39.238425 | orchestrator | 2025-06-02 20:03:39 | INFO  | Task 4efb0510-077d-4334-9816-4b4ce82f3dee is in state STARTED
2025-06-02 20:03:39.239347 | orchestrator | 2025-06-02 20:03:39 | INFO  | Task 4b60122b-2761-44e0-9fd9-858842834dfd is in state STARTED
2025-06-02 20:03:39.239399 | orchestrator | 2025-06-02 20:03:39 | INFO  | Wait 1 second(s) until the next check
2025-06-02 20:03:42.289857 | orchestrator | 2025-06-02 20:03:42 | INFO  | Task 4efb0510-077d-4334-9816-4b4ce82f3dee is in state STARTED
2025-06-02 20:03:42.290901 | orchestrator | 2025-06-02 20:03:42 | INFO  | Task 4b60122b-2761-44e0-9fd9-858842834dfd is in state STARTED
2025-06-02 20:03:42.290934 | orchestrator | 2025-06-02 20:03:42 | INFO  | Wait 1 second(s) until the next check
2025-06-02 20:03:45.344381 | orchestrator | 2025-06-02 20:03:45 | INFO  | Task 4efb0510-077d-4334-9816-4b4ce82f3dee is in state STARTED
2025-06-02 20:03:45.344444 | orchestrator | 2025-06-02 20:03:45 | INFO  | Task 4b60122b-2761-44e0-9fd9-858842834dfd is in state STARTED
2025-06-02 20:03:45.344454 | orchestrator | 2025-06-02 20:03:45 | INFO  | Wait 1 second(s) until the next check
2025-06-02 20:03:48.386465 | orchestrator | 2025-06-02 20:03:48 | INFO  | Task 4efb0510-077d-4334-9816-4b4ce82f3dee is in state STARTED
2025-06-02 20:03:48.386573 | orchestrator | 2025-06-02 20:03:48 | INFO  | Task 4b60122b-2761-44e0-9fd9-858842834dfd is in state STARTED
2025-06-02 20:03:48.386681 | orchestrator | 2025-06-02 20:03:48 | INFO  | Wait 1 second(s) until the next check
2025-06-02 20:03:51.433015 | orchestrator | 2025-06-02 20:03:51 | INFO  | Task 4efb0510-077d-4334-9816-4b4ce82f3dee is in state STARTED
2025-06-02 20:03:51.435738 | orchestrator | 2025-06-02 20:03:51 | INFO  | Task 4b60122b-2761-44e0-9fd9-858842834dfd is in state STARTED
2025-06-02 20:03:51.435838 | orchestrator | 2025-06-02 20:03:51 | INFO  | Wait 1 second(s) until the next check
2025-06-02 20:03:54.478756 | orchestrator | 2025-06-02 20:03:54 | INFO  | Task 4efb0510-077d-4334-9816-4b4ce82f3dee is in state STARTED
2025-06-02 20:03:54.481233 | orchestrator | 2025-06-02 20:03:54 | INFO  | Task 4b60122b-2761-44e0-9fd9-858842834dfd is in state STARTED
2025-06-02 20:03:54.481294 | orchestrator | 2025-06-02 20:03:54 | INFO  | Wait 1 second(s) until the next check
2025-06-02 20:03:57.525582 | orchestrator | 2025-06-02 20:03:57 | INFO  | Task 4efb0510-077d-4334-9816-4b4ce82f3dee is in state STARTED
2025-06-02 20:03:57.526338 | orchestrator | 2025-06-02 20:03:57 | INFO  | Task 4b60122b-2761-44e0-9fd9-858842834dfd is in state STARTED
2025-06-02 20:03:57.528198 | orchestrator | 2025-06-02 20:03:57 | INFO  | Wait 1 second(s) until the next check
2025-06-02 20:04:00.576550 | orchestrator | 2025-06-02 20:04:00 | INFO  | Task 4efb0510-077d-4334-9816-4b4ce82f3dee is in state STARTED
2025-06-02 20:04:00.577743 | orchestrator | 2025-06-02 20:04:00 | INFO  | Task 4b60122b-2761-44e0-9fd9-858842834dfd is in state STARTED
2025-06-02 20:04:00.578762 | orchestrator | 2025-06-02 20:04:00 | INFO  | Wait 1 second(s) until the next check
2025-06-02 20:04:03.643287 | orchestrator | 2025-06-02 20:04:03 | INFO  | Task 4efb0510-077d-4334-9816-4b4ce82f3dee is in state STARTED
2025-06-02 20:04:03.644946 | orchestrator | 2025-06-02 20:04:03 | INFO  | Task 4b60122b-2761-44e0-9fd9-858842834dfd is in state STARTED
2025-06-02 20:04:03.644998 | orchestrator | 2025-06-02 20:04:03 | INFO  | Wait 1 second(s) until the next check
2025-06-02 20:04:06.689137 | orchestrator | 2025-06-02 20:04:06 | INFO  | Task 4efb0510-077d-4334-9816-4b4ce82f3dee is in state STARTED
2025-06-02 20:04:06.689462 | orchestrator | 2025-06-02 20:04:06 | INFO  | Task 4b60122b-2761-44e0-9fd9-858842834dfd is in state STARTED
2025-06-02 20:04:06.689500 | orchestrator | 2025-06-02 20:04:06 | INFO  | Wait 1 second(s) until the next check
2025-06-02 20:04:09.733705 | orchestrator | 2025-06-02 20:04:09 | INFO  | Task 4efb0510-077d-4334-9816-4b4ce82f3dee is in state STARTED
2025-06-02 20:04:09.733805 | orchestrator | 2025-06-02 20:04:09 | INFO  | Task 4b60122b-2761-44e0-9fd9-858842834dfd is in state STARTED
2025-06-02 20:04:09.733820 | orchestrator | 2025-06-02 20:04:09 | INFO  | Wait 1 second(s) until the next check
2025-06-02 20:04:12.773315 | orchestrator | 2025-06-02 20:04:12 | INFO  | Task 4efb0510-077d-4334-9816-4b4ce82f3dee is in state STARTED
2025-06-02 20:04:12.774482 | orchestrator | 2025-06-02 20:04:12 | INFO  | Task 4b60122b-2761-44e0-9fd9-858842834dfd is in state STARTED
2025-06-02 20:04:12.774670 | orchestrator | 2025-06-02 20:04:12 | INFO  | Wait 1 second(s) until the next check
2025-06-02 20:04:15.823096 | orchestrator | 2025-06-02 20:04:15 | INFO  | Task 4efb0510-077d-4334-9816-4b4ce82f3dee is in state STARTED
2025-06-02 20:04:15.825002 | orchestrator | 2025-06-02 20:04:15 | INFO  | Task 4b60122b-2761-44e0-9fd9-858842834dfd is in state STARTED
2025-06-02 20:04:15.825116 | orchestrator | 2025-06-02 20:04:15 | INFO  | Wait 1 second(s) until the next check
2025-06-02 20:04:18.876495 | orchestrator | 2025-06-02 20:04:18 | INFO  | Task 4efb0510-077d-4334-9816-4b4ce82f3dee is in state STARTED
2025-06-02 20:04:18.878073 | orchestrator | 2025-06-02 20:04:18 | INFO  | Task 4b60122b-2761-44e0-9fd9-858842834dfd is in state STARTED
2025-06-02 20:04:18.878091 | orchestrator | 2025-06-02 20:04:18 | INFO  | Wait 1 second(s) until the next check
2025-06-02 20:04:21.936240 | orchestrator | 2025-06-02 20:04:21 | INFO  | Task 4efb0510-077d-4334-9816-4b4ce82f3dee is in state
STARTED 2025-06-02 20:04:21.936636 | orchestrator | 2025-06-02 20:04:21 | INFO  | Task 4b60122b-2761-44e0-9fd9-858842834dfd is in state STARTED 2025-06-02 20:04:21.936665 | orchestrator | 2025-06-02 20:04:21 | INFO  | Wait 1 second(s) until the next check 2025-06-02 20:04:24.984233 | orchestrator | 2025-06-02 20:04:24 | INFO  | Task 4efb0510-077d-4334-9816-4b4ce82f3dee is in state STARTED 2025-06-02 20:04:24.985353 | orchestrator | 2025-06-02 20:04:24 | INFO  | Task 4b60122b-2761-44e0-9fd9-858842834dfd is in state STARTED 2025-06-02 20:04:24.985396 | orchestrator | 2025-06-02 20:04:24 | INFO  | Wait 1 second(s) until the next check 2025-06-02 20:04:28.043405 | orchestrator | 2025-06-02 20:04:28 | INFO  | Task 4efb0510-077d-4334-9816-4b4ce82f3dee is in state STARTED 2025-06-02 20:04:28.043495 | orchestrator | 2025-06-02 20:04:28 | INFO  | Task 4b60122b-2761-44e0-9fd9-858842834dfd is in state STARTED 2025-06-02 20:04:28.043512 | orchestrator | 2025-06-02 20:04:28 | INFO  | Wait 1 second(s) until the next check 2025-06-02 20:04:31.091358 | orchestrator | 2025-06-02 20:04:31 | INFO  | Task 4efb0510-077d-4334-9816-4b4ce82f3dee is in state STARTED 2025-06-02 20:04:31.093097 | orchestrator | 2025-06-02 20:04:31 | INFO  | Task 4b60122b-2761-44e0-9fd9-858842834dfd is in state STARTED 2025-06-02 20:04:31.093742 | orchestrator | 2025-06-02 20:04:31 | INFO  | Wait 1 second(s) until the next check 2025-06-02 20:04:34.132561 | orchestrator | 2025-06-02 20:04:34 | INFO  | Task 4efb0510-077d-4334-9816-4b4ce82f3dee is in state STARTED 2025-06-02 20:04:34.133222 | orchestrator | 2025-06-02 20:04:34 | INFO  | Task 4b60122b-2761-44e0-9fd9-858842834dfd is in state STARTED 2025-06-02 20:04:34.133348 | orchestrator | 2025-06-02 20:04:34 | INFO  | Wait 1 second(s) until the next check 2025-06-02 20:04:37.180773 | orchestrator | 2025-06-02 20:04:37 | INFO  | Task 4efb0510-077d-4334-9816-4b4ce82f3dee is in state STARTED 2025-06-02 20:04:37.186499 | orchestrator | 2025-06-02 20:04:37 | INFO  
| Task 4b60122b-2761-44e0-9fd9-858842834dfd is in state SUCCESS
2025-06-02 20:04:37.188257 | orchestrator |
2025-06-02 20:04:37.188289 | orchestrator | None
2025-06-02 20:04:37.188298 | orchestrator |
2025-06-02 20:04:37.188307 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-06-02 20:04:37.188317 | orchestrator |
2025-06-02 20:04:37.188325 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-06-02 20:04:37.188348 | orchestrator | Monday 02 June 2025 19:58:21 +0000 (0:00:00.345) 0:00:00.345 ***********
2025-06-02 20:04:37.188357 | orchestrator | ok: [testbed-node-0]
2025-06-02 20:04:37.188366 | orchestrator | ok: [testbed-node-1]
2025-06-02 20:04:37.188374 | orchestrator | ok: [testbed-node-2]
2025-06-02 20:04:37.188382 | orchestrator |
2025-06-02 20:04:37.188391 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-06-02 20:04:37.188399 | orchestrator | Monday 02 June 2025 19:58:21 +0000 (0:00:00.721) 0:00:01.067 ***********
2025-06-02 20:04:37.188408 | orchestrator | ok: [testbed-node-0] => (item=enable_loadbalancer_True)
2025-06-02 20:04:37.188416 | orchestrator | ok: [testbed-node-1] => (item=enable_loadbalancer_True)
2025-06-02 20:04:37.188424 | orchestrator | ok: [testbed-node-2] => (item=enable_loadbalancer_True)
2025-06-02 20:04:37.188432 | orchestrator |
2025-06-02 20:04:37.188441 | orchestrator | PLAY [Apply role loadbalancer] *************************************************
2025-06-02 20:04:37.188449 | orchestrator |
2025-06-02 20:04:37.188488 | orchestrator | TASK [loadbalancer : include_tasks] ********************************************
2025-06-02 20:04:37.188522 | orchestrator | Monday 02 June 2025 19:58:22 +0000 (0:00:00.919) 0:00:01.989 ***********
2025-06-02 20:04:37.188531 | orchestrator | included: /ansible/roles/loadbalancer/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-02 20:04:37.188539 | orchestrator |
2025-06-02 20:04:37.188547 | orchestrator | TASK [loadbalancer : Check IPv6 support] ***************************************
2025-06-02 20:04:37.188555 | orchestrator | Monday 02 June 2025 19:58:24 +0000 (0:00:01.313) 0:00:03.303 ***********
2025-06-02 20:04:37.188563 | orchestrator | ok: [testbed-node-0]
2025-06-02 20:04:37.188616 | orchestrator | ok: [testbed-node-1]
2025-06-02 20:04:37.188625 | orchestrator | ok: [testbed-node-2]
2025-06-02 20:04:37.188633 | orchestrator |
2025-06-02 20:04:37.188641 | orchestrator | TASK [Setting sysctl values] ***************************************************
2025-06-02 20:04:37.188649 | orchestrator | Monday 02 June 2025 19:58:25 +0000 (0:00:00.963) 0:00:04.266 ***********
2025-06-02 20:04:37.188656 | orchestrator | included: sysctl for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-02 20:04:37.188664 | orchestrator |
2025-06-02 20:04:37.188672 | orchestrator | TASK [sysctl : Check IPv6 support] *********************************************
2025-06-02 20:04:37.188680 | orchestrator | Monday 02 June 2025 19:58:27 +0000 (0:00:01.862) 0:00:06.129 ***********
2025-06-02 20:04:37.188688 | orchestrator | ok: [testbed-node-0]
2025-06-02 20:04:37.188695 | orchestrator | ok: [testbed-node-1]
2025-06-02 20:04:37.188703 | orchestrator | ok: [testbed-node-2]
2025-06-02 20:04:37.188711 | orchestrator |
2025-06-02 20:04:37.188719 | orchestrator | TASK [sysctl : Setting sysctl values] ******************************************
2025-06-02 20:04:37.188726 | orchestrator | Monday 02 June 2025 19:58:27 +0000 (0:00:00.863) 0:00:06.992 ***********
2025-06-02 20:04:37.188734 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2025-06-02 20:04:37.188742 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2025-06-02 20:04:37.188750 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2025-06-02 20:04:37.188757 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2025-06-02 20:04:37.188776 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2025-06-02 20:04:37.188807 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2025-06-02 20:04:37.188815 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2025-06-02 20:04:37.188825 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2025-06-02 20:04:37.188832 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2025-06-02 20:04:37.188840 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2025-06-02 20:04:37.188848 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2025-06-02 20:04:37.188855 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2025-06-02 20:04:37.188863 | orchestrator |
2025-06-02 20:04:37.188871 | orchestrator | TASK [module-load : Load modules] **********************************************
2025-06-02 20:04:37.188913 | orchestrator | Monday 02 June 2025 19:58:30 +0000 (0:00:02.802) 0:00:09.795 ***********
2025-06-02 20:04:37.188922 | orchestrator | changed: [testbed-node-0] => (item=ip_vs)
2025-06-02 20:04:37.188930 | orchestrator | changed: [testbed-node-1] => (item=ip_vs)
2025-06-02 20:04:37.188938 | orchestrator | changed: [testbed-node-2] => (item=ip_vs)
2025-06-02 20:04:37.188946 | orchestrator |
2025-06-02 20:04:37.188954 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************
2025-06-02 20:04:37.188961 | orchestrator | Monday 02 June 2025
19:58:31 +0000 (0:00:01.023) 0:00:10.819 ***********
2025-06-02 20:04:37.188969 | orchestrator | changed: [testbed-node-0] => (item=ip_vs)
2025-06-02 20:04:37.188986 | orchestrator | changed: [testbed-node-2] => (item=ip_vs)
2025-06-02 20:04:37.188993 | orchestrator | changed: [testbed-node-1] => (item=ip_vs)
2025-06-02 20:04:37.189001 | orchestrator |
2025-06-02 20:04:37.189009 | orchestrator | TASK [module-load : Drop module persistence] ***********************************
2025-06-02 20:04:37.189035 | orchestrator | Monday 02 June 2025 19:58:33 +0000 (0:00:01.637) 0:00:12.456 ***********
2025-06-02 20:04:37.189044 | orchestrator | skipping: [testbed-node-0] => (item=ip_vs)
2025-06-02 20:04:37.189052 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:04:37.189070 | orchestrator | skipping: [testbed-node-1] => (item=ip_vs)
2025-06-02 20:04:37.189079 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:04:37.189087 | orchestrator | skipping: [testbed-node-2] => (item=ip_vs)
2025-06-02 20:04:37.189095 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:04:37.189124 | orchestrator |
2025-06-02 20:04:37.189140 | orchestrator | TASK [loadbalancer : Ensuring config directories exist] ************************
2025-06-02 20:04:37.189148 | orchestrator | Monday 02 June 2025 19:58:34 +0000 (0:00:00.752) 0:00:13.208 ***********
2025-06-02 20:04:37.189160 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2025-06-02 20:04:37.189173 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2025-06-02 20:04:37.189253 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2025-06-02 20:04:37.189263 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-06-02 20:04:37.189272 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-06-02 20:04:37.189295 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-06-02 20:04:37.189309 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-06-02 20:04:37.189318 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-06-02 20:04:37.189327 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-06-02 20:04:37.189363 | orchestrator |
2025-06-02 20:04:37.189372 | orchestrator | TASK [loadbalancer : Ensuring haproxy service config subdir exists] ************
2025-06-02 20:04:37.189380 | orchestrator | Monday 02 June 2025 19:58:36 +0000 (0:00:02.883) 0:00:16.092 ***********
2025-06-02 20:04:37.189388 | orchestrator | changed: [testbed-node-0]
2025-06-02 20:04:37.189396 | orchestrator | changed: [testbed-node-1]
2025-06-02 20:04:37.189426 | orchestrator | changed: [testbed-node-2]
2025-06-02 20:04:37.189434 | orchestrator |
2025-06-02 20:04:37.189442 | orchestrator | TASK [loadbalancer : Ensuring proxysql service config subdirectories exist] ****
2025-06-02 20:04:37.189450 | orchestrator | Monday 02 June 2025 19:58:38 +0000 (0:00:01.022) 0:00:17.114 ***********
2025-06-02 20:04:37.189458 |
orchestrator | changed: [testbed-node-0] => (item=users)
2025-06-02 20:04:37.189465 | orchestrator | changed: [testbed-node-2] => (item=users)
2025-06-02 20:04:37.189506 | orchestrator | changed: [testbed-node-1] => (item=users)
2025-06-02 20:04:37.189515 | orchestrator | changed: [testbed-node-0] => (item=rules)
2025-06-02 20:04:37.189523 | orchestrator | changed: [testbed-node-1] => (item=rules)
2025-06-02 20:04:37.189531 | orchestrator | changed: [testbed-node-2] => (item=rules)
2025-06-02 20:04:37.189539 | orchestrator |
2025-06-02 20:04:37.189547 | orchestrator | TASK [loadbalancer : Ensuring keepalived checks subdir exists] *****************
2025-06-02 20:04:37.189561 | orchestrator | Monday 02 June 2025 19:58:40 +0000 (0:00:02.697) 0:00:19.812 ***********
2025-06-02 20:04:37.189569 | orchestrator | changed: [testbed-node-0]
2025-06-02 20:04:37.189577 | orchestrator | changed: [testbed-node-1]
2025-06-02 20:04:37.189585 | orchestrator | changed: [testbed-node-2]
2025-06-02 20:04:37.189592 | orchestrator |
2025-06-02 20:04:37.189600 | orchestrator | TASK [loadbalancer : Remove mariadb.cfg if proxysql enabled] *******************
2025-06-02 20:04:37.189608 | orchestrator | Monday 02 June 2025 19:58:42 +0000 (0:00:01.865) 0:00:21.677 ***********
2025-06-02 20:04:37.189616 | orchestrator | ok: [testbed-node-0]
2025-06-02 20:04:37.189624 | orchestrator | ok: [testbed-node-1]
2025-06-02 20:04:37.189632 | orchestrator | ok: [testbed-node-2]
2025-06-02 20:04:37.189640 | orchestrator |
2025-06-02 20:04:37.189671 | orchestrator | TASK [loadbalancer : Removing checks for services which are disabled] **********
2025-06-02 20:04:37.189679 | orchestrator | Monday 02 June 2025 19:58:45 +0000 (0:00:03.093) 0:00:24.772 ***********
2025-06-02 20:04:37.189688 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2025-06-02 20:04:37.189709 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-06-02 20:04:37.189718 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2025-06-02 20:04:37.189727 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-06-02 20:04:37.189736 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.2.20250530', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__fad53d304b857156c4f04381d0ce026b551d6e6f', '__omit_place_holder__fad53d304b857156c4f04381d0ce026b551d6e6f'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})
2025-06-02 20:04:37.189751 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-06-02 20:04:37.189759 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:04:37.189767 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-06-02 20:04:37.189781 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.2.20250530', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__fad53d304b857156c4f04381d0ce026b551d6e6f', '__omit_place_holder__fad53d304b857156c4f04381d0ce026b551d6e6f'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})
2025-06-02 20:04:37.189827 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:04:37.189840 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2025-06-02 20:04:37.189849 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-06-02 20:04:37.189857 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-06-02 20:04:37.189874 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.2.20250530', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__fad53d304b857156c4f04381d0ce026b551d6e6f', '__omit_place_holder__fad53d304b857156c4f04381d0ce026b551d6e6f'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})
2025-06-02 20:04:37.189882 | orchestrator | skipping: [testbed-node-2]
2025-06-02
20:04:37.189890 | orchestrator |
2025-06-02 20:04:37.189898 | orchestrator | TASK [loadbalancer : Copying checks for services which are enabled] ************
2025-06-02 20:04:37.189906 | orchestrator | Monday 02 June 2025 19:58:46 +0000 (0:00:00.932) 0:00:25.704 ***********
2025-06-02 20:04:37.189914 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2025-06-02 20:04:37.189927 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2025-06-02 20:04:37.189939 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2025-06-02 20:04:37.189948 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-06-02 20:04:37.189956 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-06-02 20:04:37.189969 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.2.20250530', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__fad53d304b857156c4f04381d0ce026b551d6e6f', '__omit_place_holder__fad53d304b857156c4f04381d0ce026b551d6e6f'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})
2025-06-02 20:04:37.189977 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-06-02 20:04:37.189986 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-06-02 20:04:37.190003 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.2.20250530', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__fad53d304b857156c4f04381d0ce026b551d6e6f', '__omit_place_holder__fad53d304b857156c4f04381d0ce026b551d6e6f'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})
2025-06-02 20:04:37.190068 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-06-02 20:04:37.190081 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-06-02 20:04:37.190100 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.2.20250530', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__fad53d304b857156c4f04381d0ce026b551d6e6f', '__omit_place_holder__fad53d304b857156c4f04381d0ce026b551d6e6f'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-06-02 20:04:37.190108 | orchestrator | 2025-06-02 20:04:37.190116 | orchestrator | TASK [loadbalancer : Copying over config.json files for services] ************** 2025-06-02 20:04:37.190124 | orchestrator | Monday 02 June 2025 19:58:49 +0000 (0:00:03.316) 0:00:29.021 *********** 2025-06-02 20:04:37.190133 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-06-02 20:04:37.190141 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-06-02 
20:04:37.190166 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-06-02 20:04:37.190176 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-06-02 20:04:37.190278 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-06-02 20:04:37.190296 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-06-02 20:04:37.190305 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-06-02 20:04:37.190356 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-06-02 20:04:37.190386 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 
'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-06-02 20:04:37.190440 | orchestrator | 2025-06-02 20:04:37.190450 | orchestrator | TASK [loadbalancer : Copying over haproxy.cfg] ********************************* 2025-06-02 20:04:37.190458 | orchestrator | Monday 02 June 2025 19:58:54 +0000 (0:00:04.499) 0:00:33.521 *********** 2025-06-02 20:04:37.190467 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2025-06-02 20:04:37.190935 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2025-06-02 20:04:37.191002 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2025-06-02 20:04:37.191011 | orchestrator | 2025-06-02 20:04:37.191074 | orchestrator | TASK [loadbalancer : Copying over proxysql config] ***************************** 2025-06-02 20:04:37.191084 | orchestrator | Monday 02 June 2025 19:58:56 +0000 (0:00:02.015) 0:00:35.536 *********** 2025-06-02 20:04:37.191090 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2025-06-02 20:04:37.191097 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2025-06-02 20:04:37.191104 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2025-06-02 20:04:37.191126 | orchestrator | 2025-06-02 20:04:37.191134 | orchestrator | TASK [loadbalancer : Copying over haproxy single 
external frontend config] ***** 2025-06-02 20:04:37.191141 | orchestrator | Monday 02 June 2025 19:58:59 +0000 (0:00:03.548) 0:00:39.085 *********** 2025-06-02 20:04:37.191149 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:04:37.191156 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:04:37.191163 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:04:37.191170 | orchestrator | 2025-06-02 20:04:37.191177 | orchestrator | TASK [loadbalancer : Copying over custom haproxy services configuration] ******* 2025-06-02 20:04:37.191184 | orchestrator | Monday 02 June 2025 19:59:00 +0000 (0:00:00.813) 0:00:39.898 *********** 2025-06-02 20:04:37.191190 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2025-06-02 20:04:37.191199 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2025-06-02 20:04:37.191206 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2025-06-02 20:04:37.191213 | orchestrator | 2025-06-02 20:04:37.191220 | orchestrator | TASK [loadbalancer : Copying over keepalived.conf] ***************************** 2025-06-02 20:04:37.191227 | orchestrator | Monday 02 June 2025 19:59:03 +0000 (0:00:03.015) 0:00:42.914 *********** 2025-06-02 20:04:37.191235 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2025-06-02 20:04:37.191243 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2025-06-02 20:04:37.191250 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2025-06-02 20:04:37.191257 | orchestrator | 2025-06-02 20:04:37.191264 | orchestrator | TASK [loadbalancer : Copying over 
haproxy.pem] ********************************* 2025-06-02 20:04:37.191272 | orchestrator | Monday 02 June 2025 19:59:06 +0000 (0:00:02.446) 0:00:45.360 *********** 2025-06-02 20:04:37.191280 | orchestrator | changed: [testbed-node-0] => (item=haproxy.pem) 2025-06-02 20:04:37.191287 | orchestrator | changed: [testbed-node-1] => (item=haproxy.pem) 2025-06-02 20:04:37.191294 | orchestrator | changed: [testbed-node-2] => (item=haproxy.pem) 2025-06-02 20:04:37.191301 | orchestrator | 2025-06-02 20:04:37.191307 | orchestrator | TASK [loadbalancer : Copying over haproxy-internal.pem] ************************ 2025-06-02 20:04:37.191316 | orchestrator | Monday 02 June 2025 19:59:07 +0000 (0:00:01.645) 0:00:47.006 *********** 2025-06-02 20:04:37.191323 | orchestrator | changed: [testbed-node-0] => (item=haproxy-internal.pem) 2025-06-02 20:04:37.191330 | orchestrator | changed: [testbed-node-2] => (item=haproxy-internal.pem) 2025-06-02 20:04:37.191338 | orchestrator | changed: [testbed-node-1] => (item=haproxy-internal.pem) 2025-06-02 20:04:37.191344 | orchestrator | 2025-06-02 20:04:37.191350 | orchestrator | TASK [loadbalancer : include_tasks] ******************************************** 2025-06-02 20:04:37.191357 | orchestrator | Monday 02 June 2025 19:59:09 +0000 (0:00:01.637) 0:00:48.643 *********** 2025-06-02 20:04:37.191363 | orchestrator | included: /ansible/roles/loadbalancer/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 20:04:37.191370 | orchestrator | 2025-06-02 20:04:37.191376 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over extra CA certificates] *** 2025-06-02 20:04:37.191383 | orchestrator | Monday 02 June 2025 19:59:10 +0000 (0:00:00.594) 0:00:49.237 *********** 2025-06-02 20:04:37.191392 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-06-02 20:04:37.191429 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-06-02 20:04:37.191439 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-06-02 20:04:37.191447 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 
'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-06-02 20:04:37.191455 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-06-02 20:04:37.191464 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-06-02 20:04:37.191474 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-06-02 20:04:37.191487 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-06-02 20:04:37.191504 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-06-02 20:04:37.191514 | orchestrator | 2025-06-02 20:04:37.191523 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS certificate] *** 2025-06-02 20:04:37.191531 | orchestrator | Monday 02 June 2025 19:59:13 +0000 (0:00:03.292) 0:00:52.530 *********** 2025-06-02 20:04:37.191538 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': 
{'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-06-02 20:04:37.191546 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-02 20:04:37.191555 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-02 20:04:37.191563 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:04:37.191572 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': 
{'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-06-02 20:04:37.191585 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-02 20:04:37.191602 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-02 20:04:37.191611 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:04:37.191618 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': 
{'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-06-02 20:04:37.191626 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-02 20:04:37.191634 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-02 20:04:37.191640 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:04:37.191647 | orchestrator | 2025-06-02 20:04:37.191653 | orchestrator | TASK 
[service-cert-copy : loadbalancer | Copying over backend internal TLS key] *** 2025-06-02 20:04:37.191661 | orchestrator | Monday 02 June 2025 19:59:14 +0000 (0:00:00.966) 0:00:53.497 *********** 2025-06-02 20:04:37.191669 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-06-02 20:04:37.191682 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-02 20:04:37.191693 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-02 20:04:37.191701 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:04:37.191712 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-06-02 20:04:37.191720 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-02 20:04:37.191728 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-02 20:04:37.191735 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:04:37.191744 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-06-02 20:04:37.191757 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-02 20:04:37.191765 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-02 20:04:37.191773 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:04:37.191781 | orchestrator | 2025-06-02 20:04:37.191789 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2025-06-02 20:04:37.191799 | orchestrator | Monday 02 June 2025 19:59:16 +0000 (0:00:02.087) 0:00:55.584 *********** 2025-06-02 20:04:37.191815 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-06-02 20:04:37.191823 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-02 20:04:37.191830 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 
'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-02 20:04:37.191837 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:04:37.191844 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-06-02 20:04:37.191854 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-02 20:04:37.191861 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 
'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-02 20:04:37.191868 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:04:37.191881 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-06-02 20:04:37.191909 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-02 20:04:37.191917 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 
'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-02 20:04:37.191925 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:04:37.191931 | orchestrator | 2025-06-02 20:04:37.191938 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] *** 2025-06-02 20:04:37.191945 | orchestrator | Monday 02 June 2025 19:59:17 +0000 (0:00:01.482) 0:00:57.067 *********** 2025-06-02 20:04:37.191952 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-06-02 20:04:37.191966 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-02 20:04:37.191973 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-06-02 20:04:37.191985 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-02 20:04:37.191996 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-02 20:04:37.192003 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:04:37.192011 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-02 20:04:37.192032 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:04:37.192040 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-06-02 20:04:37.192052 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 
'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-02 20:04:37.192060 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-02 20:04:37.192067 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:04:37.192074 | orchestrator | 2025-06-02 20:04:37.192082 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] ***** 2025-06-02 20:04:37.192089 | orchestrator | Monday 02 June 2025 19:59:18 +0000 (0:00:00.724) 0:00:57.791 *********** 2025-06-02 20:04:37.192097 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-06-02 20:04:37.192113 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-02 20:04:37.192122 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-02 20:04:37.192130 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:04:37.192137 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-06-02 20:04:37.192149 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-02 20:04:37.192164 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-02 20:04:37.192171 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:04:37.192178 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-06-02 20:04:37.192189 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-02 20:04:37.192200 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-02 20:04:37.192209 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:04:37.192215 | orchestrator | 2025-06-02 20:04:37.192222 | orchestrator | TASK [service-cert-copy : proxysql | Copying over extra CA certificates] ******* 2025-06-02 20:04:37.192229 | orchestrator | Monday 02 June 2025 19:59:20 +0000 (0:00:01.352) 0:00:59.144 *********** 2025-06-02 20:04:37.192236 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-06-02 20:04:37.192249 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-02 20:04:37.192256 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-02 20:04:37.192263 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:04:37.192270 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-06-02 20:04:37.192278 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-02 20:04:37.192293 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-02 20:04:37.192300 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:04:37.192308 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-06-02 20:04:37.192321 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-02 20:04:37.192328 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-02 20:04:37.192336 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:04:37.192343 | orchestrator | 2025-06-02 20:04:37.192350 | orchestrator | TASK [service-cert-copy : proxysql | Copying over backend internal TLS certificate] *** 2025-06-02 20:04:37.192357 | orchestrator | Monday 02 June 2025 19:59:20 +0000 (0:00:00.608) 0:00:59.753 *********** 2025-06-02 20:04:37.192365 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 
'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-06-02 20:04:37.192372 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-02 20:04:37.192384 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-02 20:04:37.192394 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:04:37.192401 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 
'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-06-02 20:04:37.192413 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-02 20:04:37.192420 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-02 20:04:37.192426 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:04:37.192433 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 
'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-06-02 20:04:37.192440 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-02 20:04:37.192446 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-02 20:04:37.192453 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:04:37.192460 | orchestrator | 2025-06-02 20:04:37.192466 | orchestrator | TASK [service-cert-copy : proxysql | Copying over backend internal TLS key] **** 2025-06-02 20:04:37.192476 | orchestrator | Monday 02 June 2025 19:59:21 +0000 (0:00:00.589) 
0:01:00.342 *********** 2025-06-02 20:04:37.192485 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-06-02 20:04:37.192496 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-02 20:04:37.192503 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-02 20:04:37.192510 | orchestrator | 
skipping: [testbed-node-0] 2025-06-02 20:04:37.192517 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-06-02 20:04:37.192524 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-02 20:04:37.192530 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-02 20:04:37.192537 | orchestrator | 
skipping: [testbed-node-1] 2025-06-02 20:04:37.192551 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-06-02 20:04:37.192567 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-02 20:04:37.192575 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-02 20:04:37.192582 | orchestrator | 
skipping: [testbed-node-2] 2025-06-02 20:04:37.192589 | orchestrator | 2025-06-02 20:04:37.192596 | orchestrator | TASK [loadbalancer : Copying over haproxy start script] ************************ 2025-06-02 20:04:37.192603 | orchestrator | Monday 02 June 2025 19:59:22 +0000 (0:00:01.323) 0:01:01.665 *********** 2025-06-02 20:04:37.192610 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2025-06-02 20:04:37.192617 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2025-06-02 20:04:37.192624 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2025-06-02 20:04:37.192631 | orchestrator | 2025-06-02 20:04:37.192638 | orchestrator | TASK [loadbalancer : Copying over proxysql start script] *********************** 2025-06-02 20:04:37.192645 | orchestrator | Monday 02 June 2025 19:59:24 +0000 (0:00:01.636) 0:01:03.302 *********** 2025-06-02 20:04:37.192652 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2025-06-02 20:04:37.192659 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2025-06-02 20:04:37.192666 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2025-06-02 20:04:37.192673 | orchestrator | 2025-06-02 20:04:37.192680 | orchestrator | TASK [loadbalancer : Copying files for haproxy-ssh] **************************** 2025-06-02 20:04:37.192687 | orchestrator | Monday 02 June 2025 19:59:25 +0000 (0:00:01.716) 0:01:05.018 *********** 2025-06-02 20:04:37.192694 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2025-06-02 20:04:37.192701 | orchestrator | skipping: [testbed-node-1] => (item={'src': 
'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2025-06-02 20:04:37.192708 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2025-06-02 20:04:37.192715 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-06-02 20:04:37.192722 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:04:37.192729 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-06-02 20:04:37.192736 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:04:37.192747 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-06-02 20:04:37.192754 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:04:37.192761 | orchestrator | 2025-06-02 20:04:37.192768 | orchestrator | TASK [loadbalancer : Check loadbalancer containers] **************************** 2025-06-02 20:04:37.192775 | orchestrator | Monday 02 June 2025 19:59:27 +0000 (0:00:01.547) 0:01:06.566 *********** 2025-06-02 20:04:37.192788 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-06-02 20:04:37.192800 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-06-02 20:04:37.192808 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-06-02 20:04:37.192815 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-06-02 20:04:37.192823 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 
'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-06-02 20:04:37.192830 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-06-02 20:04:37.192842 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-06-02 20:04:37.192855 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-06-02 20:04:37.192864 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-06-02 20:04:37.192871 | orchestrator | 2025-06-02 20:04:37.192879 | orchestrator | TASK [include_role : aodh] ***************************************************** 2025-06-02 20:04:37.192886 | orchestrator | Monday 02 June 2025 19:59:30 +0000 (0:00:02.800) 0:01:09.367 *********** 2025-06-02 20:04:37.192895 | orchestrator | included: aodh for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 20:04:37.192907 | orchestrator | 2025-06-02 20:04:37.192919 | orchestrator | TASK [haproxy-config : Copying over aodh haproxy config] *********************** 2025-06-02 20:04:37.192927 | orchestrator | Monday 02 June 2025 19:59:31 +0000 (0:00:00.854) 0:01:10.221 *********** 2025-06-02 20:04:37.192935 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20250530', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2025-06-02 20:04:37.192943 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20250530', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-06-02 20:04:37.192957 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20250530', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-06-02 20:04:37.192965 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20250530', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-06-02 20:04:37.193688 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20250530', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2025-06-02 20:04:37.193727 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20250530', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-06-02 20:04:37.193736 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20250530', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-06-02 20:04:37.193743 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20250530', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2025-06-02 20:04:37.193761 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20250530', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-06-02 20:04:37.193769 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20250530', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-06-02 20:04:37.193784 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20250530', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-06-02 20:04:37.193792 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20250530', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-06-02 
20:04:37.193800 | orchestrator | 2025-06-02 20:04:37.193807 | orchestrator | TASK [haproxy-config : Add configuration for aodh when using single external frontend] *** 2025-06-02 20:04:37.193815 | orchestrator | Monday 02 June 2025 19:59:35 +0000 (0:00:04.791) 0:01:15.013 *********** 2025-06-02 20:04:37.193823 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20250530', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2025-06-02 20:04:37.193830 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20250530', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-06-02 20:04:37.193841 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20250530', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-06-02 20:04:37.193867 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20250530', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-06-02 20:04:37.193876 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:04:37.193892 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20250530', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '8042', 'listen_port': '8042'}}}})  2025-06-02 20:04:37.193900 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20250530', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-06-02 20:04:37.193908 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20250530', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-06-02 20:04:37.193916 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20250530', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-06-02 20:04:37.193929 | orchestrator | skipping: [testbed-node-1] 
2025-06-02 20:04:37.193937 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20250530', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2025-06-02 20:04:37.193944 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20250530', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-06-02 20:04:37.193959 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20250530', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2025-06-02 20:04:37.193966 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20250530', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2025-06-02 20:04:37.193974 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:04:37.193981 | orchestrator |
2025-06-02 20:04:37.193988 | orchestrator | TASK [haproxy-config : Configuring firewall for aodh] **************************
2025-06-02 20:04:37.193995 | orchestrator | Monday 02 June 2025 19:59:36 +0000 (0:00:00.749) 0:01:15.763 ***********
2025-06-02 20:04:37.194003 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})
2025-06-02 20:04:37.194011 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})
2025-06-02 20:04:37.194091 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:04:37.194104 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})
2025-06-02 20:04:37.194112 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})
2025-06-02 20:04:37.194119 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:04:37.194127 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})
2025-06-02 20:04:37.194134 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})
2025-06-02 20:04:37.194142 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:04:37.194150 | orchestrator |
2025-06-02 20:04:37.194157 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL users config] ***************
2025-06-02 20:04:37.194165 | orchestrator | Monday 02 June 2025 19:59:37 +0000 (0:00:01.228) 0:01:16.991 ***********
2025-06-02 20:04:37.194172 | orchestrator | changed: [testbed-node-0]
2025-06-02 20:04:37.194180 | orchestrator | changed: [testbed-node-1]
2025-06-02 20:04:37.194188 | orchestrator | changed: [testbed-node-2]
2025-06-02 20:04:37.194195 | orchestrator |
2025-06-02 20:04:37.194203 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL rules config] ***************
2025-06-02 20:04:37.194210 | orchestrator | Monday 02 June 2025 19:59:39 +0000 (0:00:01.591) 0:01:18.583 ***********
2025-06-02 20:04:37.194217 | orchestrator | changed: [testbed-node-0]
2025-06-02 20:04:37.194225 | orchestrator | changed: [testbed-node-1]
2025-06-02 20:04:37.194232 | orchestrator | changed: [testbed-node-2]
2025-06-02 20:04:37.194239 | orchestrator |
2025-06-02 20:04:37.194248 | orchestrator | TASK [include_role : barbican] *************************************************
2025-06-02 20:04:37.194256 | orchestrator | Monday 02 June 2025 19:59:41 +0000 (0:00:02.279) 0:01:20.862 ***********
2025-06-02 20:04:37.194264 | orchestrator | included: barbican for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-02 20:04:37.194273 | orchestrator |
2025-06-02 20:04:37.194282 | orchestrator | TASK [haproxy-config : Copying over barbican haproxy config] *******************
2025-06-02 20:04:37.194290 | orchestrator | Monday 02 June 2025 19:59:42 +0000 (0:00:00.658) 0:01:21.521 ***********
2025-06-02 20:04:37.194310 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-06-02 20:04:37.194320 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-06-02 20:04:37.194333 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-06-02 20:04:37.194341 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-06-02 20:04:37.194349 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-06-02 20:04:37.194361 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-06-02 20:04:37.194372 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-06-02 20:04:37.194386 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-06-02 20:04:37.194394 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-06-02 20:04:37.194401 | orchestrator |
2025-06-02 20:04:37.194409 | orchestrator | TASK [haproxy-config : Add configuration for barbican when using single external frontend] ***
2025-06-02 20:04:37.194416 | orchestrator | Monday 02 June 2025 19:59:48 +0000 (0:00:05.693) 0:01:27.214 ***********
2025-06-02 20:04:37.194422 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-06-02 20:04:37.194430 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-06-02 20:04:37.194449 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-06-02 20:04:37.194458 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:04:37.194465 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-06-02 20:04:37.194479 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-06-02 20:04:37.194487 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-06-02 20:04:37.194495 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-06-02 20:04:37.194503 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:04:37.194514 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-06-02 20:04:37.194525 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-06-02 20:04:37.194537 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:04:37.194544 | orchestrator |
2025-06-02 20:04:37.194551 | orchestrator | TASK [haproxy-config : Configuring firewall for barbican] **********************
2025-06-02 20:04:37.194558 | orchestrator | Monday 02 June 2025 19:59:48 +0000 (0:00:00.679) 0:01:27.894 ***********
2025-06-02 20:04:37.194565 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})
2025-06-02 20:04:37.194572 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})
2025-06-02 20:04:37.194581 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:04:37.194588 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})
2025-06-02 20:04:37.194595 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})
2025-06-02 20:04:37.194603 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:04:37.194610 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})
2025-06-02 20:04:37.194617 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})
2025-06-02 20:04:37.194624 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:04:37.194632 | orchestrator |
2025-06-02 20:04:37.194639 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL users config] ***********
2025-06-02 20:04:37.194646 | orchestrator | Monday 02 June 2025 19:59:49 +0000 (0:00:00.852) 0:01:28.746 ***********
2025-06-02 20:04:37.194653 | orchestrator | changed: [testbed-node-0]
2025-06-02 20:04:37.194660 | orchestrator | changed: [testbed-node-1]
2025-06-02 20:04:37.194668 | orchestrator | changed: [testbed-node-2]
2025-06-02 20:04:37.194674 | orchestrator |
2025-06-02 20:04:37.194681 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL rules config] ***********
2025-06-02 20:04:37.194688 | orchestrator | Monday 02 June 2025 19:59:51 +0000 (0:00:01.862) 0:01:30.608 ***********
2025-06-02 20:04:37.194695 | orchestrator | changed: [testbed-node-0]
2025-06-02 20:04:37.194702 | orchestrator | changed: [testbed-node-1]
2025-06-02 20:04:37.194709 | orchestrator | changed: [testbed-node-2]
2025-06-02 20:04:37.194716 | orchestrator |
2025-06-02 20:04:37.194723 | orchestrator | TASK [include_role : blazar] ***************************************************
2025-06-02 20:04:37.194730 | orchestrator | Monday 02 June 2025 19:59:53 +0000 (0:00:02.114) 0:01:32.722 ***********
2025-06-02 20:04:37.194737 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:04:37.194744 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:04:37.194751 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:04:37.194759 | orchestrator |
2025-06-02 20:04:37.194766 | orchestrator | TASK [include_role : ceph-rgw] *************************************************
2025-06-02 20:04:37.194773 | orchestrator | Monday 02 June 2025 19:59:53 +0000 (0:00:00.330) 0:01:33.053 ***********
2025-06-02 20:04:37.194780 | orchestrator | included: ceph-rgw for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-02 20:04:37.194792 | orchestrator |
2025-06-02 20:04:37.194799 | orchestrator | TASK [haproxy-config : Copying over ceph-rgw haproxy config] *******************
2025-06-02 20:04:37.194806 | orchestrator | Monday 02 June 2025 19:59:54 +0000 (0:00:00.659) 0:01:33.713 ***********
2025-06-02 20:04:37.194822 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})
2025-06-02 20:04:37.194830 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})
2025-06-02 20:04:37.194838 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})
2025-06-02 20:04:37.194844 | orchestrator |
2025-06-02 20:04:37.194851 | orchestrator | TASK [haproxy-config : Add configuration for ceph-rgw when using single external frontend] ***
2025-06-02 20:04:37.194858 | orchestrator | Monday 02 June 2025 19:59:57 +0000 (0:00:03.041) 0:01:36.754 ***********
2025-06-02 20:04:37.194865 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})
2025-06-02 20:04:37.194872 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:04:37.194879 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})
2025-06-02 20:04:37.194891 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:04:37.194905 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})
2025-06-02 20:04:37.194912 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:04:37.194919 | orchestrator |
2025-06-02 20:04:37.194926 | orchestrator | TASK [haproxy-config : Configuring firewall for ceph-rgw] **********************
2025-06-02 20:04:37.194933 | orchestrator | Monday 02 June 2025 19:59:59 +0000 (0:00:01.514) 0:01:38.269 ***********
2025-06-02 20:04:37.194941 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})
2025-06-02 20:04:37.194950 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})
2025-06-02 20:04:37.194958 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:04:37.194965 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})
2025-06-02 20:04:37.194972 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})
2025-06-02 20:04:37.194980 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:04:37.194987 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})
2025-06-02 20:04:37.194999 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})
2025-06-02 20:04:37.195006 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:04:37.195013 | orchestrator |
2025-06-02 20:04:37.195069 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL users config] ***********
2025-06-02 20:04:37.195077 | orchestrator | Monday 02 June 2025 20:00:00 +0000 (0:00:01.684) 0:01:39.954 ***********
2025-06-02 20:04:37.195083 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:04:37.195089 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:04:37.195094 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:04:37.195099 | orchestrator |
2025-06-02 20:04:37.195105 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL rules config] ***********
2025-06-02 20:04:37.195110 | orchestrator | Monday 02 June 2025 20:00:01 +0000 (0:00:00.905) 0:01:40.859 ***********
2025-06-02 20:04:37.195117 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:04:37.195123 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:04:37.195129 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:04:37.195134 | orchestrator |
2025-06-02 20:04:37.195140 | orchestrator | TASK [include_role : cinder] ***************************************************
2025-06-02 20:04:37.195151 | orchestrator | Monday 02 June 2025 20:00:03 +0000 (0:00:01.251) 0:01:42.111 ***********
2025-06-02 20:04:37.195157 | orchestrator | included: cinder for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-02 20:04:37.195163 | orchestrator |
2025-06-02 20:04:37.195172 | orchestrator | TASK [haproxy-config : Copying over cinder haproxy config] *********************
2025-06-02 20:04:37.195178 | orchestrator | Monday 02 June 2025 20:00:03 +0000 (0:00:00.722) 0:01:42.833 ***********
2025-06-02 20:04:37.195185 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-06-02 20:04:37.195194 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-06-02 20:04:37.195205 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-06-02 20:04:37.195213 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-06-02 20:04:37.195224 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-06-02 20:04:37.195233 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-06-02 20:04:37.195240 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-06-02 20:04:37.195247 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-06-02 20:04:37.195259 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-06-02 20:04:37.195265 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-06-02 20:04:37.195279 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value':
{'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-06-02 20:04:37.195286 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-06-02 20:04:37.195292 | orchestrator | 2025-06-02 20:04:37.195299 | orchestrator | TASK [haproxy-config : Add configuration for cinder when using single external frontend] *** 2025-06-02 20:04:37.195306 | orchestrator | Monday 02 June 2025 20:00:07 +0000 (0:00:03.895) 0:01:46.729 *********** 2025-06-02 20:04:37.195317 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 
'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-06-02 20:04:37.195325 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-06-02 20:04:37.195332 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 
'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-06-02 20:04:37.195351 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-06-02 20:04:37.195359 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:04:37.195366 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-06-02 20:04:37.195377 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-06-02 20:04:37.195383 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-06-02 20:04:37.195390 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': 
['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-06-02 20:04:37.195397 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:04:37.195411 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-06-02 20:04:37.195418 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-06-02 20:04:37.195425 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-06-02 20:04:37.195437 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-06-02 20:04:37.195444 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:04:37.195451 | orchestrator | 2025-06-02 20:04:37.195458 | orchestrator | TASK [haproxy-config : Configuring firewall for cinder] ************************ 
2025-06-02 20:04:37.195464 | orchestrator | Monday 02 June 2025 20:00:08 +0000 (0:00:01.290) 0:01:48.020 *********** 2025-06-02 20:04:37.195471 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-06-02 20:04:37.195479 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-06-02 20:04:37.195487 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:04:37.195493 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-06-02 20:04:37.195500 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-06-02 20:04:37.195506 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:04:37.195517 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-06-02 20:04:37.195527 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-06-02 20:04:37.195533 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:04:37.195540 | orchestrator | 2025-06-02 20:04:37.195547 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL users 
config] ************* 2025-06-02 20:04:37.195554 | orchestrator | Monday 02 June 2025 20:00:09 +0000 (0:00:01.060) 0:01:49.081 *********** 2025-06-02 20:04:37.195560 | orchestrator | changed: [testbed-node-0] 2025-06-02 20:04:37.195566 | orchestrator | changed: [testbed-node-1] 2025-06-02 20:04:37.195572 | orchestrator | changed: [testbed-node-2] 2025-06-02 20:04:37.195579 | orchestrator | 2025-06-02 20:04:37.195590 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL rules config] ************* 2025-06-02 20:04:37.195597 | orchestrator | Monday 02 June 2025 20:00:11 +0000 (0:00:01.429) 0:01:50.510 *********** 2025-06-02 20:04:37.195603 | orchestrator | changed: [testbed-node-0] 2025-06-02 20:04:37.195610 | orchestrator | changed: [testbed-node-1] 2025-06-02 20:04:37.195617 | orchestrator | changed: [testbed-node-2] 2025-06-02 20:04:37.195624 | orchestrator | 2025-06-02 20:04:37.195630 | orchestrator | TASK [include_role : cloudkitty] *********************************************** 2025-06-02 20:04:37.195637 | orchestrator | Monday 02 June 2025 20:00:13 +0000 (0:00:01.973) 0:01:52.484 *********** 2025-06-02 20:04:37.195643 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:04:37.195650 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:04:37.195656 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:04:37.195662 | orchestrator | 2025-06-02 20:04:37.195668 | orchestrator | TASK [include_role : cyborg] *************************************************** 2025-06-02 20:04:37.195676 | orchestrator | Monday 02 June 2025 20:00:13 +0000 (0:00:00.534) 0:01:53.018 *********** 2025-06-02 20:04:37.195682 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:04:37.195690 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:04:37.195697 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:04:37.195703 | orchestrator | 2025-06-02 20:04:37.195710 | orchestrator | TASK [include_role : designate] 
************************************************ 2025-06-02 20:04:37.195716 | orchestrator | Monday 02 June 2025 20:00:14 +0000 (0:00:00.356) 0:01:53.374 *********** 2025-06-02 20:04:37.195723 | orchestrator | included: designate for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 20:04:37.195729 | orchestrator | 2025-06-02 20:04:37.195736 | orchestrator | TASK [haproxy-config : Copying over designate haproxy config] ****************** 2025-06-02 20:04:37.195743 | orchestrator | Monday 02 June 2025 20:00:15 +0000 (0:00:00.741) 0:01:54.116 *********** 2025-06-02 20:04:37.195751 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-06-02 20:04:37.195759 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-06-02 20:04:37.196334 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-06-02 20:04:37.196377 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-06-02 20:04:37.196384 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-06-02 20:04:37.196391 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-06-02 20:04:37.196398 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-06-02 20:04:37.196404 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-06-02 20:04:37.196416 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-06-02 20:04:37.196431 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-06-02 20:04:37.196437 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-06-02 20:04:37.196443 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-06-02 20:04:37.196449 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-06-02 20:04:37.196456 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20250530', 'volumes': 
['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2025-06-02 20:04:37.196462 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-06-02 20:04:37.196480 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-06-02 20:04:37.196486 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-06-02 20:04:37.196493 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-06-02 20:04:37.196499 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-06-02 20:04:37.196505 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-06-02 20:04:37.196512 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2025-06-02 20:04:37.196517 | orchestrator |
2025-06-02 20:04:37.196524 | orchestrator | TASK [haproxy-config : Add configuration for designate when using single external frontend] ***
2025-06-02 20:04:37.196534 | orchestrator | Monday 02 June 2025 20:00:18 +0000 (0:00:03.939) 0:01:58.055 ***********
2025-06-02 20:04:37.196546 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-06-02 20:04:37.196553 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-06-02 20:04:37.196559 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-06-02 20:04:37.196565 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-06-02 20:04:37.196572 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-06-02 20:04:37.196577 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-06-02 20:04:37.196592 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2025-06-02 20:04:37.196598 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:04:37.196609 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-06-02 20:04:37.196615 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-06-02 20:04:37.196621 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-06-02 20:04:37.196627 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-06-02 20:04:37.196638 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-06-02 20:04:37.196648 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-06-02 20:04:37.196658 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-06-02 20:04:37.196665 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-06-02 20:04:37.196671 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-06-02 20:04:37.196678 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-06-02 20:04:37.196684 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-06-02 20:04:37.196695 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2025-06-02 20:04:37.196705 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:04:37.196712 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-06-02 20:04:37.196719 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2025-06-02 20:04:37.196726 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:04:37.196732 | orchestrator |
2025-06-02 20:04:37.196739 | orchestrator | TASK [haproxy-config : Configuring firewall for designate] *********************
2025-06-02 20:04:37.196745 | orchestrator | Monday 02 June 2025 20:00:19 +0000 (0:00:00.809) 0:01:58.865 ***********
2025-06-02 20:04:37.196752 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})
2025-06-02 20:04:37.196758 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})
2025-06-02 20:04:37.196765 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:04:37.196772 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})
2025-06-02 20:04:37.196779 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})
2025-06-02 20:04:37.196785 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:04:37.196792 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})
2025-06-02 20:04:37.196799 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})
2025-06-02 20:04:37.196810 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:04:37.196816 | orchestrator |
2025-06-02 20:04:37.196822 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL users config] **********
2025-06-02 20:04:37.196828 | orchestrator | Monday 02 June 2025 20:00:20 +0000 (0:00:01.012) 0:01:59.878 ***********
2025-06-02 20:04:37.196834 | orchestrator | changed: [testbed-node-0]
2025-06-02 20:04:37.196840 | orchestrator | changed: [testbed-node-1]
2025-06-02 20:04:37.196845 | orchestrator | changed: [testbed-node-2]
2025-06-02 20:04:37.196851 | orchestrator |
2025-06-02 20:04:37.196857 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL rules config] **********
2025-06-02 20:04:37.196863 | orchestrator | Monday 02 June 2025 20:00:22 +0000 (0:00:01.754) 0:02:01.632 ***********
2025-06-02 20:04:37.196869 | orchestrator | changed: [testbed-node-0]
2025-06-02 20:04:37.196876 | orchestrator | changed: [testbed-node-1]
2025-06-02 20:04:37.196882 | orchestrator | changed: [testbed-node-2]
2025-06-02 20:04:37.196888 | orchestrator |
2025-06-02 20:04:37.196895 | orchestrator | TASK [include_role : etcd] *****************************************************
2025-06-02 20:04:37.196902 | orchestrator | Monday 02 June 2025 20:00:24 +0000 (0:00:01.942) 0:02:03.575 ***********
2025-06-02 20:04:37.196909 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:04:37.196916 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:04:37.196923 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:04:37.196930 | orchestrator |
2025-06-02 20:04:37.196959 | orchestrator | TASK [include_role : glance] ***************************************************
2025-06-02 20:04:37.196967 | orchestrator | Monday 02 June 2025 20:00:24 +0000 (0:00:00.318) 0:02:03.894 ***********
2025-06-02 20:04:37.196975 | orchestrator | included: glance for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-02 20:04:37.196983 | orchestrator |
2025-06-02 20:04:37.196990 | orchestrator | TASK [haproxy-config : Copying over glance haproxy config] *********************
2025-06-02 20:04:37.196997 | orchestrator | Monday 02 June 2025 20:00:25 +0000 (0:00:00.819) 0:02:04.714 ***********
2025-06-02 20:04:37.197038 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2025-06-02 20:04:37.197049 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2025-06-02 20:04:37.197070 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20250530', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})
2025-06-02 20:04:37.197078 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20250530', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})
2025-06-02 20:04:37.197101 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2025-06-02 20:04:37.197110 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20250530', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})
2025-06-02 20:04:37.197124 | orchestrator |
2025-06-02 20:04:37.197131 | orchestrator | TASK [haproxy-config : Add configuration for glance when using single external frontend] ***
2025-06-02 20:04:37.197139 | orchestrator | Monday 02 June 2025 20:00:30 +0000 (0:00:04.612) 0:02:09.327 ***********
2025-06-02 20:04:37.197153 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2025-06-02 20:04:37.197166 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20250530', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})
2025-06-02 20:04:37.197177 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:04:37.197185 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2025-06-02 20:04:37.197202 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20250530', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30',
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-06-02 20:04:37.197217 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:04:37.197225 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-06-02 20:04:37.197242 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20250530', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 
'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-06-02 20:04:37.197256 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:04:37.197263 | orchestrator | 2025-06-02 20:04:37.197272 | orchestrator | TASK [haproxy-config : Configuring firewall for glance] ************************ 2025-06-02 20:04:37.197280 | orchestrator | Monday 02 June 2025 20:00:33 +0000 (0:00:02.964) 0:02:12.291 *********** 2025-06-02 20:04:37.197287 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 
2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-06-02 20:04:37.197295 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-06-02 20:04:37.197302 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:04:37.197310 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-06-02 20:04:37.197317 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-06-02 20:04:37.197324 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:04:37.197334 | orchestrator | skipping: [testbed-node-1] 
=> (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-06-02 20:04:37.197345 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-06-02 20:04:37.197351 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:04:37.197357 | orchestrator | 2025-06-02 20:04:37.197364 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL users config] ************* 2025-06-02 20:04:37.197370 | orchestrator | Monday 02 June 2025 20:00:38 +0000 (0:00:05.166) 0:02:17.458 *********** 2025-06-02 20:04:37.197382 | orchestrator | changed: [testbed-node-0] 2025-06-02 20:04:37.197389 | orchestrator | changed: [testbed-node-1] 2025-06-02 20:04:37.197396 | orchestrator | changed: [testbed-node-2] 2025-06-02 20:04:37.197403 | orchestrator | 2025-06-02 20:04:37.197409 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL rules config] ************* 2025-06-02 20:04:37.197415 | orchestrator | Monday 02 June 2025 20:00:39 +0000 (0:00:01.627) 0:02:19.085 *********** 2025-06-02 20:04:37.197421 | orchestrator | changed: [testbed-node-0] 2025-06-02 20:04:37.197428 | orchestrator | 
changed: [testbed-node-1] 2025-06-02 20:04:37.197434 | orchestrator | changed: [testbed-node-2] 2025-06-02 20:04:37.197440 | orchestrator | 2025-06-02 20:04:37.197446 | orchestrator | TASK [include_role : gnocchi] ************************************************** 2025-06-02 20:04:37.197452 | orchestrator | Monday 02 June 2025 20:00:42 +0000 (0:00:02.183) 0:02:21.269 *********** 2025-06-02 20:04:37.197458 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:04:37.197464 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:04:37.197470 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:04:37.197476 | orchestrator | 2025-06-02 20:04:37.197482 | orchestrator | TASK [include_role : grafana] ************************************************** 2025-06-02 20:04:37.197488 | orchestrator | Monday 02 June 2025 20:00:42 +0000 (0:00:00.371) 0:02:21.640 *********** 2025-06-02 20:04:37.197494 | orchestrator | included: grafana for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 20:04:37.197501 | orchestrator | 2025-06-02 20:04:37.197507 | orchestrator | TASK [haproxy-config : Copying over grafana haproxy config] ******************** 2025-06-02 20:04:37.197514 | orchestrator | Monday 02 June 2025 20:00:43 +0000 (0:00:00.805) 0:02:22.445 *********** 2025-06-02 20:04:37.197522 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.1.20250530', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 
2025-06-02 20:04:37.197530 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.1.20250530', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-06-02 20:04:37.197538 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.1.20250530', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-06-02 20:04:37.197545 | orchestrator | 2025-06-02 20:04:37.197556 | orchestrator | TASK [haproxy-config : Add configuration for grafana when using single external frontend] *** 2025-06-02 20:04:37.197563 | orchestrator | Monday 02 June 2025 20:00:46 +0000 (0:00:03.622) 0:02:26.068 *********** 2025-06-02 20:04:37.197579 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.1.20250530', 'volumes': 
['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-06-02 20:04:37.197587 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:04:37.197594 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.1.20250530', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-06-02 20:04:37.197601 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:04:37.197608 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.1.20250530', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-06-02 20:04:37.197616 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:04:37.197623 | orchestrator | 2025-06-02 20:04:37.197630 | orchestrator | TASK [haproxy-config : Configuring firewall for grafana] *********************** 2025-06-02 20:04:37.197637 | orchestrator | Monday 02 June 2025 20:00:47 +0000 (0:00:00.410) 0:02:26.479 *********** 2025-06-02 20:04:37.197644 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2025-06-02 20:04:37.197651 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2025-06-02 20:04:37.197659 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:04:37.197666 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2025-06-02 20:04:37.197674 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2025-06-02 20:04:37.197680 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2025-06-02 20:04:37.197687 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:04:37.197694 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2025-06-02 
20:04:37.197706 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:04:37.197714 | orchestrator | 2025-06-02 20:04:37.197721 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL users config] ************ 2025-06-02 20:04:37.197727 | orchestrator | Monday 02 June 2025 20:00:48 +0000 (0:00:00.811) 0:02:27.290 *********** 2025-06-02 20:04:37.197734 | orchestrator | changed: [testbed-node-0] 2025-06-02 20:04:37.197741 | orchestrator | changed: [testbed-node-1] 2025-06-02 20:04:37.197748 | orchestrator | changed: [testbed-node-2] 2025-06-02 20:04:37.197755 | orchestrator | 2025-06-02 20:04:37.197765 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL rules config] ************ 2025-06-02 20:04:37.197773 | orchestrator | Monday 02 June 2025 20:00:49 +0000 (0:00:01.644) 0:02:28.934 *********** 2025-06-02 20:04:37.197780 | orchestrator | changed: [testbed-node-0] 2025-06-02 20:04:37.197786 | orchestrator | changed: [testbed-node-2] 2025-06-02 20:04:37.197793 | orchestrator | changed: [testbed-node-1] 2025-06-02 20:04:37.197800 | orchestrator | 2025-06-02 20:04:37.197807 | orchestrator | TASK [include_role : heat] ***************************************************** 2025-06-02 20:04:37.197814 | orchestrator | Monday 02 June 2025 20:00:51 +0000 (0:00:01.853) 0:02:30.788 *********** 2025-06-02 20:04:37.197821 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:04:37.197828 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:04:37.197838 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:04:37.197843 | orchestrator | 2025-06-02 20:04:37.197849 | orchestrator | TASK [include_role : horizon] ************************************************** 2025-06-02 20:04:37.197855 | orchestrator | Monday 02 June 2025 20:00:52 +0000 (0:00:00.531) 0:02:31.320 *********** 2025-06-02 20:04:37.197860 | orchestrator | included: horizon for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 20:04:37.197866 | orchestrator | 
2025-06-02 20:04:37.197873 | orchestrator | TASK [haproxy-config : Copying over horizon haproxy config] ******************** 2025-06-02 20:04:37.197879 | orchestrator | Monday 02 June 2025 20:00:53 +0000 (0:00:01.424) 0:02:32.745 *********** 2025-06-02 20:04:37.197887 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250530', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg 
^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-06-02 20:04:37.199229 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250530', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 
'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-06-02 20:04:37.199269 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250530', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 
'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2025-06-02 20:04:37.199287 | orchestrator |
2025-06-02 20:04:37.199295 | orchestrator | TASK [haproxy-config : Add configuration for horizon when using single external frontend] ***
2025-06-02 20:04:37.199302 | orchestrator | Monday 02 June 2025 20:00:58 +0000 (0:00:04.747) 0:02:37.493 ***********
2025-06-02 20:04:37.199370 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250530', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2025-06-02 20:04:37.199382 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:04:37.199389 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250530', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2025-06-02 20:04:37.199403 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:04:37.199465 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250530', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False,
'custom_member_list': []}}}})
2025-06-02 20:04:37.199475 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:04:37.199481 | orchestrator |
2025-06-02 20:04:37.199488 | orchestrator | TASK [haproxy-config : Configuring firewall for horizon] ***********************
2025-06-02 20:04:37.199495 | orchestrator | Monday 02 June 2025 20:00:59 +0000 (0:00:00.768) 0:02:38.261 ***********
2025-06-02 20:04:37.199502 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})
2025-06-02 20:04:37.199516 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})
2025-06-02 20:04:37.199525 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})
2025-06-02 20:04:37.199532 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})
2025-06-02 20:04:37.199539 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})
2025-06-02 20:04:37.199547 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:04:37.199553 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})
2025-06-02 20:04:37.199603 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})
2025-06-02 20:04:37.199614 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})
2025-06-02 20:04:37.199620 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})
2025-06-02 20:04:37.199627 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})
2025-06-02 20:04:37.199633 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:04:37.199639 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})
2025-06-02 20:04:37.199646 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})
2025-06-02 20:04:37.199652 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})
2025-06-02 20:04:37.199665 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})
2025-06-02 20:04:37.199672 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})
2025-06-02 20:04:37.199678 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:04:37.199684 | orchestrator |
2025-06-02 20:04:37.199690 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL users config] ************
2025-06-02 20:04:37.199696 | orchestrator | Monday 02 June 2025 20:01:00 +0000 (0:00:01.140) 0:02:39.402 ***********
2025-06-02 20:04:37.199702 | orchestrator | changed: [testbed-node-0]
2025-06-02 20:04:37.199708 | orchestrator | changed: [testbed-node-1]
2025-06-02 20:04:37.199714 | orchestrator | changed: [testbed-node-2]
2025-06-02 20:04:37.199720 | orchestrator |
2025-06-02 20:04:37.199726 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL rules config] ************
2025-06-02 20:04:37.199732 | orchestrator | Monday 02 June 2025 20:01:01 +0000 (0:00:01.443) 0:02:40.845 ***********
2025-06-02 20:04:37.199738 | orchestrator | changed: [testbed-node-0]
2025-06-02 20:04:37.199744 | orchestrator | changed: [testbed-node-1]
2025-06-02 20:04:37.199751 | orchestrator | changed: [testbed-node-2]
2025-06-02 20:04:37.199757 | orchestrator |
2025-06-02 20:04:37.199763 | orchestrator | TASK [include_role : influxdb] *************************************************
2025-06-02 20:04:37.199769 | orchestrator | Monday 02 June 2025 20:01:03 +0000 (0:00:01.701) 0:02:42.547 ***********
2025-06-02 20:04:37.199775 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:04:37.199781 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:04:37.199787 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:04:37.199793 | orchestrator |
2025-06-02 20:04:37.199799 | orchestrator | TASK [include_role : ironic] ***************************************************
2025-06-02 20:04:37.199805 | orchestrator | Monday 02 June 2025 20:01:03 +0000 (0:00:00.231) 0:02:42.778 ***********
2025-06-02 20:04:37.199811 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:04:37.199817 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:04:37.199823 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:04:37.199829 | orchestrator |
2025-06-02 20:04:37.199835 | orchestrator | TASK [include_role : keystone] *************************************************
2025-06-02 20:04:37.199841 | orchestrator | Monday 02 June 2025 20:01:03 +0000 (0:00:00.998) 0:02:43.039 ***********
2025-06-02 20:04:37.199847 | orchestrator | included: keystone for testbed-node-0, testbed-node-1,
testbed-node-2
2025-06-02 20:04:37.199853 | orchestrator |
2025-06-02 20:04:37.199859 | orchestrator | TASK [haproxy-config : Copying over keystone haproxy config] *******************
2025-06-02 20:04:37.199913 | orchestrator | Monday 02 June 2025 20:01:04 +0000 (0:00:00.998) 0:02:44.038 ***********
2025-06-02 20:04:37.199926 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-06-02 20:04:37.199939 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-06-02 20:04:37.199947 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-06-02 20:04:37.199954 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-06-02 20:04:37.199960 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-06-02 20:04:37.200005 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-06-02 20:04:37.200013 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-06-02 20:04:37.200049 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-06-02 20:04:37.200055 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-06-02 20:04:37.200063 | orchestrator |
2025-06-02 20:04:37.200069 | orchestrator | TASK [haproxy-config : Add configuration for keystone when using single external frontend] ***
2025-06-02 20:04:37.200076 | orchestrator | Monday 02 June 2025 20:01:08 +0000 (0:00:03.592) 0:02:47.630 ***********
2025-06-02 20:04:37.200083 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image':
'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-06-02 20:04:37.200133 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-06-02 20:04:37.200142 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-06-02 20:04:37.200153 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:04:37.200161 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-06-02 20:04:37.200168 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-06-02 20:04:37.200174 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-06-02 20:04:37.200180 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:04:37.200237 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-06-02 20:04:37.200253 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-06-02 20:04:37.200260 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-06-02 20:04:37.200267 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:04:37.200273 | orchestrator |
2025-06-02 20:04:37.200279 | orchestrator | TASK [haproxy-config : Configuring firewall for keystone] **********************
2025-06-02 20:04:37.200286 | orchestrator | Monday 02 June 2025 20:01:09 +0000 (0:00:00.534) 0:02:48.164 ***********
2025-06-02 20:04:37.200294 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})
2025-06-02 20:04:37.200302 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz',
'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})
2025-06-02 20:04:37.200309 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:04:37.200316 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})
2025-06-02 20:04:37.200323 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})
2025-06-02 20:04:37.200330 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:04:37.200337 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})
2025-06-02 20:04:37.200344 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})
2025-06-02 20:04:37.200350 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:04:37.200357 | orchestrator |
2025-06-02 20:04:37.200364 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL users config] ***********
2025-06-02 20:04:37.200371 | orchestrator | Monday 02 June 2025 20:01:09 +0000 (0:00:00.861) 0:02:49.026 ***********
2025-06-02 20:04:37.200378 | orchestrator | changed: [testbed-node-0]
2025-06-02 20:04:37.200384 | orchestrator | changed: [testbed-node-1]
2025-06-02 20:04:37.200395 | orchestrator | changed: [testbed-node-2]
2025-06-02 20:04:37.200401 | orchestrator |
2025-06-02 20:04:37.200408 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL rules config] ***********
2025-06-02 20:04:37.200456 | orchestrator | Monday 02 June 2025 20:01:11 +0000 (0:00:01.237) 0:02:50.263 ***********
2025-06-02 20:04:37.200464 | orchestrator | changed: [testbed-node-0]
2025-06-02 20:04:37.200471 | orchestrator | changed: [testbed-node-1]
2025-06-02 20:04:37.200478 | orchestrator | changed: [testbed-node-2]
2025-06-02 20:04:37.200484 | orchestrator |
2025-06-02 20:04:37.200491 | orchestrator | TASK [include_role : letsencrypt] **********************************************
2025-06-02 20:04:37.200498 | orchestrator | Monday 02 June 2025 20:01:12 +0000 (0:00:01.808) 0:02:52.071 ***********
2025-06-02 20:04:37.200505 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:04:37.200511 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:04:37.200518 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:04:37.200525 | orchestrator |
2025-06-02 20:04:37.200534 | orchestrator | TASK [include_role : magnum] ***************************************************
2025-06-02 20:04:37.200541 | orchestrator | Monday 02 June 2025 20:01:13 +0000 (0:00:00.304) 0:02:52.375 ***********
2025-06-02 20:04:37.200547 | orchestrator | included: magnum for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-02 20:04:37.200553 | orchestrator |
2025-06-02 20:04:37.200560 | orchestrator | TASK [haproxy-config : Copying over magnum haproxy config] *********************
2025-06-02 20:04:37.200566 | orchestrator | Monday 02 June 2025 20:01:14 +0000 (0:00:01.204) 0:02:53.580 ***********
2025-06-02 20:04:37.200573 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-06-02 20:04:37.200581 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-06-02 20:04:37.200589 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-06-02 20:04:37.200639 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-06-02 20:04:37.200653 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-06-02 20:04:37.200660 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-06-02 20:04:37.200668 | orchestrator |
2025-06-02 20:04:37.200675 | orchestrator | TASK [haproxy-config : Add configuration for magnum when using single external frontend] ***
2025-06-02 20:04:37.200682 | orchestrator | Monday 02 June 2025 20:01:17 +0000 (0:00:03.285) 0:02:56.866 ***********
2025-06-02 20:04:37.200689 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511',
'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-06-02 20:04:37.200696 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-06-02 20:04:37.200706 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:04:37.200763 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  
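
The pattern in the loop results above — `changed` for the `magnum-api` item but `skipping` for the `magnum-conductor` item — comes from the haproxy-config role only rendering frontends for services that expose a `haproxy` mapping. A minimal sketch of that selection logic (the function name `haproxy_candidates` and the simplified condition are assumptions, not kolla-ansible's actual implementation):

```python
# Hedged sketch: why magnum-api shows "changed" while magnum-conductor
# shows "skipping" in the "Copying over magnum haproxy config" task.
# Only services whose definition carries a 'haproxy' mapping get a
# load-balancer frontend; workers like magnum-conductor have none.
services = {
    "magnum-api": {"enabled": True, "haproxy": {"magnum_api": {"port": "9511"}}},
    "magnum-conductor": {"enabled": True},  # no 'haproxy' key -> skipped
}

def haproxy_candidates(services):
    """Yield (name, frontends) for enabled services with haproxy frontends."""
    for name, svc in services.items():
        if svc.get("enabled") and svc.get("haproxy"):
            yield name, svc["haproxy"]

print(dict(haproxy_candidates(services)))
# -> {'magnum-api': {'magnum_api': {'port': '9511'}}}
```
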
2025-06-02 20:04:37.200776 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-06-02 20:04:37.200783 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:04:37.200791 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-06-02 20:04:37.200798 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-06-02 20:04:37.200810 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:04:37.200817 | orchestrator | 2025-06-02 20:04:37.200825 | orchestrator | TASK [haproxy-config : Configuring firewall for magnum] ************************ 2025-06-02 20:04:37.200831 | orchestrator | Monday 02 June 2025 20:01:18 +0000 (0:00:00.607) 0:02:57.474 *********** 2025-06-02 20:04:37.200838 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2025-06-02 20:04:37.200845 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2025-06-02 20:04:37.200851 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:04:37.200857 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2025-06-02 20:04:37.200864 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2025-06-02 20:04:37.200871 | orchestrator | 
skipping: [testbed-node-0] 2025-06-02 20:04:37.200922 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2025-06-02 20:04:37.200931 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2025-06-02 20:04:37.200939 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:04:37.200946 | orchestrator | 2025-06-02 20:04:37.200953 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL users config] ************* 2025-06-02 20:04:37.200960 | orchestrator | Monday 02 June 2025 20:01:19 +0000 (0:00:01.242) 0:02:58.716 *********** 2025-06-02 20:04:37.200970 | orchestrator | changed: [testbed-node-0] 2025-06-02 20:04:37.200977 | orchestrator | changed: [testbed-node-2] 2025-06-02 20:04:37.200984 | orchestrator | changed: [testbed-node-1] 2025-06-02 20:04:37.200991 | orchestrator | 2025-06-02 20:04:37.200998 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL rules config] ************* 2025-06-02 20:04:37.201005 | orchestrator | Monday 02 June 2025 20:01:20 +0000 (0:00:01.180) 0:02:59.897 *********** 2025-06-02 20:04:37.201012 | orchestrator | changed: [testbed-node-0] 2025-06-02 20:04:37.201036 | orchestrator | changed: [testbed-node-1] 2025-06-02 20:04:37.201043 | orchestrator | changed: [testbed-node-2] 2025-06-02 20:04:37.201049 | orchestrator | 2025-06-02 20:04:37.201055 | orchestrator | TASK [include_role : manila] *************************************************** 2025-06-02 20:04:37.201061 | orchestrator | Monday 02 June 2025 20:01:22 +0000 (0:00:01.821) 0:03:01.718 *********** 2025-06-02 20:04:37.201068 | orchestrator | included: manila for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 20:04:37.201074 | orchestrator | 
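
Every service item printed above carries the same healthcheck shape: `healthcheck_curl <url>` for HTTP APIs and `healthcheck_port <process> <port>` for RabbitMQ-connected workers, with identical interval/retries/timeout values. A small helper reproducing that block (the helper itself is an illustration, not a kolla-ansible function):

```python
# Hedged sketch of the healthcheck block repeated in the log items above.
# The default values mirror the interval/retries/start_period/timeout
# strings printed for every magnum and manila service.
def kolla_healthcheck(test_cmd, interval="30", retries="3",
                      start_period="5", timeout="30"):
    """Build a container healthcheck dict in the shape kolla logs show."""
    return {
        "interval": interval,
        "retries": retries,
        "start_period": start_period,
        "test": ["CMD-SHELL", test_cmd],
        "timeout": timeout,
    }

# API containers probe their HTTP endpoint; workers probe an open
# connection to the message queue port (5672).
api_hc = kolla_healthcheck("healthcheck_curl http://192.168.16.10:9511")
worker_hc = kolla_healthcheck("healthcheck_port magnum-conductor 5672")
```
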
2025-06-02 20:04:37.201081 | orchestrator | TASK [haproxy-config : Copying over manila haproxy config] ********************* 2025-06-02 20:04:37.201088 | orchestrator | Monday 02 June 2025 20:01:23 +0000 (0:00:00.907) 0:03:02.626 *********** 2025-06-02 20:04:37.201096 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.0.2.20250530', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2025-06-02 20:04:37.201109 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.0.2.20250530', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-06-02 20:04:37.201116 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 
'registry.osism.tech/kolla/release/manila-share:19.0.2.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-06-02 20:04:37.201123 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.0.2.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-06-02 20:04:37.201177 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.0.2.20250530', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2025-06-02 20:04:37.201186 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.0.2.20250530', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-06-02 20:04:37.201194 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.0.2.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-06-02 20:04:37.201206 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.0.2.20250530', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 
'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2025-06-02 20:04:37.201213 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.0.2.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-06-02 20:04:37.201257 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.0.2.20250530', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-06-02 20:04:37.201268 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.0.2.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-06-02 20:04:37.201275 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.0.2.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-06-02 20:04:37.201282 | orchestrator | 2025-06-02 20:04:37.201289 | orchestrator | TASK [haproxy-config : Add configuration for manila when using single external frontend] *** 2025-06-02 20:04:37.201296 | orchestrator | Monday 02 June 2025 20:01:27 +0000 (0:00:03.932) 0:03:06.558 *********** 2025-06-02 20:04:37.201308 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.0.2.20250530', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 
'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2025-06-02 20:04:37.201315 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.0.2.20250530', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-06-02 20:04:37.201322 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.0.2.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-06-02 20:04:37.201367 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.0.2.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-06-02 20:04:37.201378 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:04:37.201385 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.0.2.20250530', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2025-06-02 20:04:37.201410 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.0.2.20250530', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-06-02 20:04:37.201423 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 
'registry.osism.tech/kolla/release/manila-share:19.0.2.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-06-02 20:04:37.201430 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.0.2.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-06-02 20:04:37.201436 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:04:37.201443 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.0.2.20250530', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': 
{'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2025-06-02 20:04:37.201496 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.0.2.20250530', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-06-02 20:04:37.201509 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.0.2.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-06-02 20:04:37.201517 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.0.2.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2025-06-02 20:04:37.201529 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:04:37.201536 | orchestrator |
2025-06-02 20:04:37.201543 | orchestrator | TASK [haproxy-config : Configuring firewall for manila] ************************
2025-06-02 20:04:37.201550 | orchestrator | Monday 02 June 2025 20:01:28 +0000 (0:00:00.843) 0:03:07.401 ***********
2025-06-02 20:04:37.201557 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})
2025-06-02 20:04:37.201564 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})
2025-06-02 20:04:37.201572 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:04:37.201579 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})
2025-06-02 20:04:37.201586 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})
2025-06-02 20:04:37.201593 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:04:37.201600 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})
2025-06-02 20:04:37.201607 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})
2025-06-02 20:04:37.201613 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:04:37.201620 | orchestrator |
2025-06-02 20:04:37.201627 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL users config] *************
2025-06-02 20:04:37.201634 | orchestrator | Monday 02 June 2025 20:01:29 +0000 (0:00:00.860) 0:03:08.262 ***********
2025-06-02 20:04:37.201641 | orchestrator | changed: [testbed-node-0]
2025-06-02 20:04:37.201648 | orchestrator | changed: [testbed-node-1]
2025-06-02 20:04:37.201654 | orchestrator | changed: [testbed-node-2]
2025-06-02 20:04:37.201661 | orchestrator |
2025-06-02 20:04:37.201668 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL rules config] *************
2025-06-02 20:04:37.201675 | orchestrator | Monday 02 June 2025 20:01:30 +0000 (0:00:01.641) 0:03:09.903 ***********
2025-06-02 20:04:37.201682 | orchestrator | changed: [testbed-node-0]
2025-06-02 20:04:37.201688 | orchestrator | changed: [testbed-node-1]
2025-06-02 20:04:37.201695 | orchestrator | changed: [testbed-node-2]
2025-06-02 20:04:37.201702 | orchestrator |
2025-06-02 20:04:37.201709 | orchestrator | TASK [include_role : mariadb] **************************************************
2025-06-02 20:04:37.201715 | orchestrator | Monday 02 June 2025 20:01:32 +0000 (0:00:02.157) 0:03:12.061 ***********
2025-06-02 20:04:37.201722 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-02 20:04:37.201729 | orchestrator |
2025-06-02 20:04:37.201736 | orchestrator | TASK [mariadb : Ensure mysql monitor user exist] *******************************
2025-06-02 20:04:37.201783 | orchestrator | Monday 02 June 2025 20:01:34 +0000 (0:00:03.135) 0:03:13.168 ***********
2025-06-02 20:04:37.201792 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2025-06-02 20:04:37.201798 | orchestrator |
2025-06-02 20:04:37.201805 | orchestrator | TASK [haproxy-config : Copying over mariadb haproxy config] ********************
2025-06-02 20:04:37.201817 | orchestrator | Monday 02 June 2025 20:01:37 +0000 (0:00:03.135) 0:03:16.304 ***********
2025-06-02 20:04:37.201829 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2025-06-02 20:04:37.201837 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})
2025-06-02 20:04:37.201843 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:04:37.201892 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2025-06-02 20:04:37.201910 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})
2025-06-02 20:04:37.201917 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:04:37.201924 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2025-06-02 20:04:37.201932 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})
2025-06-02 20:04:37.201939 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:04:37.201946 | orchestrator |
2025-06-02 20:04:37.201953 | orchestrator | TASK [haproxy-config : Add configuration for mariadb when using single external frontend] ***
2025-06-02 20:04:37.201960 | orchestrator | Monday 02 June 2025 20:01:39 +0000 (0:00:02.416) 0:03:18.720 ***********
2025-06-02 20:04:37.202090 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2025-06-02 20:04:37.202108 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})
2025-06-02 20:04:37.202115 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:04:37.202123 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2025-06-02 20:04:37.202190 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})
2025-06-02 20:04:37.202199 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:04:37.202207 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2025-06-02 20:04:37.202215 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})
2025-06-02 20:04:37.202222 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:04:37.202229 | orchestrator |
2025-06-02 20:04:37.202236 | orchestrator | TASK [haproxy-config : Configuring firewall for mariadb] ***********************
2025-06-02 20:04:37.202243 | orchestrator | Monday 02 June 2025 20:01:41 +0000 (0:00:02.058) 0:03:20.778 ***********
2025-06-02 20:04:37.202250 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})
2025-06-02 20:04:37.202292 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})
2025-06-02 20:04:37.202300 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:04:37.202309 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})
2025-06-02 20:04:37.202317 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})
2025-06-02 20:04:37.202324 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:04:37.202331 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})
2025-06-02 20:04:37.202338 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})
2025-06-02 20:04:37.202345 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:04:37.202351 | orchestrator |
2025-06-02 20:04:37.202357 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL users config] ************
2025-06-02 20:04:37.202364 | orchestrator | Monday 02 June 2025 20:01:44 +0000 (0:00:02.485) 0:03:23.264 ***********
2025-06-02 20:04:37.202370 | orchestrator | changed: [testbed-node-0]
2025-06-02 20:04:37.202377 | orchestrator | changed: [testbed-node-1]
2025-06-02 20:04:37.202389 | orchestrator | changed: [testbed-node-2]
2025-06-02 20:04:37.202396 | orchestrator |
2025-06-02 20:04:37.202403 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL rules config] ************
2025-06-02 20:04:37.202409 | orchestrator | Monday 02 June 2025 20:01:46 +0000 (0:00:02.104) 0:03:25.368 ***********
2025-06-02 20:04:37.202416 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:04:37.202422 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:04:37.202429 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:04:37.202435 | orchestrator |
2025-06-02 20:04:37.202441 | orchestrator | TASK [include_role : masakari] *************************************************
2025-06-02 20:04:37.202447 | orchestrator | Monday 02 June 2025 20:01:47 +0000 (0:00:01.388) 0:03:26.756 ***********
2025-06-02 20:04:37.202454 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:04:37.202461 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:04:37.202467 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:04:37.202474 | orchestrator |
2025-06-02 20:04:37.202480 | orchestrator | TASK [include_role : memcached] ************************************************
2025-06-02 20:04:37.202487 | orchestrator | Monday 02 June 2025 20:01:47 +0000 (0:00:00.297) 0:03:27.053 ***********
2025-06-02 20:04:37.202494 | orchestrator | included: memcached for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-02 20:04:37.202502 | orchestrator |
2025-06-02 20:04:37.202509 | orchestrator | TASK [haproxy-config : Copying over memcached haproxy config] ******************
2025-06-02 20:04:37.202564 | orchestrator | Monday 02 June 2025 20:01:49 +0000 (0:00:01.081) 0:03:28.135 ***********
2025-06-02 20:04:37.202581 | orchestrator | changed: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.18.20250530', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2025-06-02 20:04:37.202589 | orchestrator | changed: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.18.20250530', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2025-06-02 20:04:37.202595 | orchestrator | changed: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.18.20250530', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2025-06-02 20:04:37.202602 | orchestrator |
2025-06-02 20:04:37.202609 | orchestrator | TASK [haproxy-config : Add configuration for memcached when using single external frontend] ***
2025-06-02 20:04:37.202620 | orchestrator | Monday 02 June 2025 20:01:50 +0000 (0:00:01.641) 0:03:29.777 ***********
2025-06-02 20:04:37.202627 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.18.20250530', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2025-06-02 20:04:37.202634 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.18.20250530', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2025-06-02 20:04:37.202640 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:04:37.202646 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:04:37.202701 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.18.20250530', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2025-06-02 20:04:37.202712 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:04:37.202718 | orchestrator |
2025-06-02 20:04:37.202724 | orchestrator | TASK [haproxy-config : Configuring firewall for memcached] *********************
2025-06-02 20:04:37.202731 | orchestrator | Monday 02 June 2025 20:01:51 +0000 (0:00:00.355) 0:03:30.132 ***********
2025-06-02 20:04:37.202738 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})
2025-06-02 20:04:37.202746 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})
2025-06-02 20:04:37.202752 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:04:37.202758 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:04:37.202765 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})
2025-06-02 20:04:37.202775 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:04:37.202782 | orchestrator |
2025-06-02 20:04:37.202788 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL users config] **********
2025-06-02 20:04:37.202794 | orchestrator | Monday 02 June 2025 20:01:51 +0000 (0:00:00.512) 0:03:30.645 ***********
2025-06-02 20:04:37.202800 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:04:37.202806 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:04:37.202812 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:04:37.202817 | orchestrator |
2025-06-02 20:04:37.202824 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL rules config] **********
2025-06-02 20:04:37.202830 | orchestrator | Monday 02 June 2025 20:01:52 +0000 (0:00:00.551) 0:03:31.196 ***********
2025-06-02 20:04:37.202836 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:04:37.202842 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:04:37.202847 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:04:37.202853 | orchestrator |
2025-06-02 20:04:37.202859 | orchestrator | TASK [include_role : mistral] **************************************************
2025-06-02 20:04:37.202865 | orchestrator | Monday 02 June 2025 20:01:53 +0000 (0:00:01.087) 0:03:32.284 ***********
2025-06-02 20:04:37.202872 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:04:37.202879 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:04:37.202885 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:04:37.202891 | orchestrator |
2025-06-02 20:04:37.202897 | orchestrator | TASK [include_role : neutron] **************************************************
2025-06-02 20:04:37.202903 | orchestrator | Monday 02 June 2025 20:01:53 +0000 (0:00:00.277) 0:03:32.561 ***********
2025-06-02 20:04:37.202910 | orchestrator | included: neutron for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-02 20:04:37.202916 | orchestrator |
2025-06-02 20:04:37.202923 | orchestrator | TASK [haproxy-config : Copying over neutron haproxy config] ********************
2025-06-02 20:04:37.202929 | orchestrator | Monday 02 June 2025 20:01:54 +0000 (0:00:01.386) 0:03:33.948 ***********
2025-06-02 20:04:37.202935 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-06-02 20:04:37.203009 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.1.1.20250530', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})
2025-06-02 20:04:37.203040 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})
2025-06-02 20:04:37.203057 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})
2025-06-02 20:04:37.203064 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})
2025-06-02 20:04:37.203070 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-06-02 20:04:37.203131 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})
2025-06-02 20:04:37.203143 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.1.1.20250530', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2025-06-02 20:04:37.203155 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image':
'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.1.1.20250530', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-06-02 20:04:37.203162 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.1.1.20250530', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-06-02 20:04:37.203168 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-06-02 20:04:37.203175 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-06-02 20:04:37.203229 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-06-02 20:04:37.203243 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 
'/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-02 20:04:37.203256 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-06-02 20:04:37.203262 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-06-02 20:04:37.203268 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-sriov-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-06-02 20:04:37.203275 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-06-02 20:04:37.203322 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.1.1.20250530', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-06-02 20:04:37.203334 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-06-02 20:04:37.203346 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.1.1.20250530', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-06-02 20:04:37.203353 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.1.1.20250530', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-06-02 20:04:37.203359 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-06-02 20:04:37.203366 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.1.1.20250530', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-06-02 20:04:37.203419 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 
 2025-06-02 20:04:37.203433 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.1.1.20250530', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-06-02 20:04:37.203444 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-06-02 20:04:37.203450 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/release/neutron-ovn-vpn-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-ovn-vpn-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-06-02 20:04:37.203456 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-06-02 20:04:37.203462 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-06-02 20:04:37.203469 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.1.1.20250530', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 
'timeout': '30'}}})  2025-06-02 20:04:37.203526 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.1.1.20250530', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-06-02 20:04:37.203544 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-06-02 
20:04:37.203551 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.1.1.20250530', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-06-02 20:04:37.203557 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/release/neutron-ovn-vpn-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-ovn-vpn-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-06-02 20:04:37.203563 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.1.1.20250530', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-06-02 20:04:37.203617 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-06-02 20:04:37.203632 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-06-02 20:04:37.203638 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': 
{'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-06-02 20:04:37.203644 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-06-02 20:04:37.203650 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.1.1.20250530', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-06-02 20:04:37.203657 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': 
{'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.1.1.20250530', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-06-02 20:04:37.203708 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-06-02 20:04:37.203728 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-02 20:04:37.203736 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-06-02 20:04:37.203743 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-06-02 20:04:37.203749 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-06-02 20:04:37.203756 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 
'registry.osism.tech/kolla/release/ironic-neutron-agent:25.1.1.20250530', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-06-02 20:04:37.203811 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.1.1.20250530', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-06-02 20:04:37.203829 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.1.1.20250530', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-06-02 20:04:37.203835 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/release/neutron-ovn-vpn-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-ovn-vpn-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-06-02 20:04:37.203841 | orchestrator | 2025-06-02 20:04:37.203848 | orchestrator | TASK [haproxy-config : Add configuration for neutron when using single external frontend] *** 2025-06-02 20:04:37.203854 | orchestrator | Monday 02 June 2025 20:01:59 +0000 (0:00:04.187) 0:03:38.136 *********** 2025-06-02 20:04:37.203860 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': 
{'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-06-02 20:04:37.203867 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.1.1.20250530', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-06-02 20:04:37.203929 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-06-02 20:04:37.203943 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-06-02 20:04:37.203950 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-06-02 20:04:37.203956 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-06-02 20:04:37.203963 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.1.1.20250530', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-06-02 20:04:37.203971 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-06-02 20:04:37.204034 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.1.1.20250530', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-06-02 20:04:37.204050 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-06-02 20:04:37.204058 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.1.1.20250530', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-06-02 20:04:37.204065 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-02 20:04:37.204071 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-06-02 20:04:37.204083 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-06-02 20:04:37.204144 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-06-02 20:04:37.204155 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-06-02 20:04:37.204162 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-06-02 20:04:37.204168 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-06-02 20:04:37.204175 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-06-02 20:04:37.204186 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.1.1.20250530', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-06-02 20:04:37.204234 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.1.1.20250530', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-06-02 20:04:37.204247 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.1.1.20250530', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-06-02 20:04:37.204255 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.1.1.20250530', 
'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-06-02 20:04:37.204262 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.1.1.20250530', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-06-02 20:04:37.204269 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-06-02 20:04:37.204283 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/release/neutron-ovn-vpn-agent:25.1.1.20250530', 'privileged': True, 'enabled': 
False, 'group': 'neutron-ovn-vpn-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-06-02 20:04:37.204336 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-02 20:04:37.204347 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:04:37.204354 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-06-02 20:04:37.204361 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-06-02 20:04:37.204369 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-06-02 20:04:37.204376 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-06-02 20:04:37.204389 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.1.1.20250530', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-06-02 20:04:37.204444 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.1.1.20250530', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-06-02 20:04:37.204455 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.1.1.20250530', 
'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-06-02 20:04:37.204462 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.1.1.20250530', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-06-02 20:04:37.204470 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-06-02 20:04:37.204482 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.1.1.20250530', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-06-02 20:04:37.204532 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-06-02 20:04:37.204545 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 
'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/release/neutron-ovn-vpn-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-ovn-vpn-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-06-02 20:04:37.204552 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-06-02 20:04:37.204560 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:04:37.204568 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.1.1.20250530', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-06-02 20:04:37.204575 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': 
{'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.1.1.20250530', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-06-02 20:04:37.204587 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-06-02 20:04:37.204642 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-02 20:04:37.204655 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-06-02 20:04:37.204663 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-06-02 20:04:37.204670 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-06-02 20:04:37.204677 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 
'registry.osism.tech/kolla/release/ironic-neutron-agent:25.1.1.20250530', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-06-02 20:04:37.204689 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.1.1.20250530', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-06-02 20:04:37.204715 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.1.1.20250530', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-06-02 20:04:37.204723 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/release/neutron-ovn-vpn-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-ovn-vpn-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-06-02 20:04:37.204731 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:04:37.204738 | orchestrator | 2025-06-02 20:04:37.204745 | orchestrator | TASK [haproxy-config : Configuring firewall for neutron] *********************** 2025-06-02 20:04:37.204752 | orchestrator | Monday 02 June 2025 20:02:00 +0000 (0:00:01.582) 0:03:39.719 *********** 2025-06-02 20:04:37.204760 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2025-06-02 20:04:37.204767 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2025-06-02 20:04:37.204774 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:04:37.204781 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2025-06-02 20:04:37.204788 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2025-06-02 20:04:37.204800 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:04:37.204807 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2025-06-02 20:04:37.204815 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2025-06-02 20:04:37.204821 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:04:37.204828 | orchestrator | 2025-06-02 20:04:37.204835 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL users config] ************ 2025-06-02 20:04:37.204841 | orchestrator | Monday 02 June 2025 20:02:02 +0000 (0:00:02.003) 0:03:41.722 *********** 2025-06-02 20:04:37.204847 | orchestrator | changed: [testbed-node-0] 2025-06-02 20:04:37.204853 | orchestrator | changed: [testbed-node-1] 2025-06-02 20:04:37.204859 | orchestrator | changed: [testbed-node-2] 2025-06-02 20:04:37.204864 | orchestrator | 2025-06-02 20:04:37.204870 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL rules config] ************ 2025-06-02 20:04:37.204876 | orchestrator | Monday 02 June 2025 20:02:03 +0000 (0:00:01.308) 0:03:43.030 *********** 2025-06-02 20:04:37.204882 | orchestrator | changed: [testbed-node-0] 2025-06-02 20:04:37.204888 | orchestrator | changed: [testbed-node-1] 2025-06-02 20:04:37.204893 | orchestrator | changed: [testbed-node-2] 
2025-06-02 20:04:37.204899 | orchestrator | 2025-06-02 20:04:37.204905 | orchestrator | TASK [include_role : placement] ************************************************ 2025-06-02 20:04:37.204911 | orchestrator | Monday 02 June 2025 20:02:06 +0000 (0:00:02.255) 0:03:45.285 *********** 2025-06-02 20:04:37.204917 | orchestrator | included: placement for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 20:04:37.204923 | orchestrator | 2025-06-02 20:04:37.204931 | orchestrator | TASK [haproxy-config : Copying over placement haproxy config] ****************** 2025-06-02 20:04:37.205005 | orchestrator | Monday 02 June 2025 20:02:07 +0000 (0:00:01.224) 0:03:46.509 *********** 2025-06-02 20:04:37.205073 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-06-02 20:04:37.205085 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-06-02 20:04:37.205092 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-06-02 20:04:37.205105 | orchestrator | 2025-06-02 20:04:37.205113 | orchestrator | TASK [haproxy-config : Add configuration for placement when using single external frontend] *** 2025-06-02 20:04:37.205120 | orchestrator | Monday 02 June 2025 20:02:10 +0000 (0:00:03.384) 0:03:49.894 *********** 2025-06-02 20:04:37.205127 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 
'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-06-02 20:04:37.205134 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:04:37.205159 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-06-02 20:04:37.205168 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:04:37.205178 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-06-02 20:04:37.205191 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:04:37.205198 | orchestrator | 2025-06-02 20:04:37.205205 | orchestrator | TASK [haproxy-config : Configuring firewall for placement] ********************* 2025-06-02 20:04:37.205212 | orchestrator | Monday 02 June 2025 20:02:11 +0000 (0:00:00.500) 0:03:50.395 *********** 2025-06-02 20:04:37.205219 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-06-02 20:04:37.205227 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-06-02 20:04:37.205234 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:04:37.205241 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': 
'8780', 'tls_backend': 'no'}})  2025-06-02 20:04:37.205249 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-06-02 20:04:37.205256 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:04:37.205263 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-06-02 20:04:37.205271 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-06-02 20:04:37.205279 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:04:37.205287 | orchestrator | 2025-06-02 20:04:37.205295 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL users config] ********** 2025-06-02 20:04:37.205303 | orchestrator | Monday 02 June 2025 20:02:12 +0000 (0:00:00.743) 0:03:51.139 *********** 2025-06-02 20:04:37.205311 | orchestrator | changed: [testbed-node-0] 2025-06-02 20:04:37.205319 | orchestrator | changed: [testbed-node-1] 2025-06-02 20:04:37.205326 | orchestrator | changed: [testbed-node-2] 2025-06-02 20:04:37.205332 | orchestrator | 2025-06-02 20:04:37.205339 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL rules config] ********** 2025-06-02 20:04:37.205346 | orchestrator | Monday 02 June 2025 20:02:13 +0000 (0:00:01.633) 0:03:52.773 *********** 2025-06-02 20:04:37.205354 | orchestrator | changed: [testbed-node-0] 2025-06-02 20:04:37.205363 | orchestrator | changed: [testbed-node-1] 2025-06-02 20:04:37.205371 | orchestrator | changed: [testbed-node-2] 2025-06-02 20:04:37.205378 | orchestrator | 
2025-06-02 20:04:37.205385 | orchestrator | TASK [include_role : nova] ***************************************************** 2025-06-02 20:04:37.205392 | orchestrator | Monday 02 June 2025 20:02:15 +0000 (0:00:02.256) 0:03:55.029 *********** 2025-06-02 20:04:37.205399 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 20:04:37.205406 | orchestrator | 2025-06-02 20:04:37.205414 | orchestrator | TASK [haproxy-config : Copying over nova haproxy config] *********************** 2025-06-02 20:04:37.205420 | orchestrator | Monday 02 June 2025 20:02:17 +0000 (0:00:01.251) 0:03:56.281 *********** 2025-06-02 20:04:37.205453 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-06-02 20:04:37.205469 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-06-02 20:04:37.205478 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-06-02 20:04:37.205485 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 
'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-06-02 20:04:37.205493 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-06-02 20:04:37.205520 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-06-02 20:04:37.205538 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 
'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-06-02 20:04:37.205549 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-06-02 20:04:37.205557 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 
'registry.osism.tech/kolla/release/nova-super-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-06-02 20:04:37.205564 | orchestrator | 2025-06-02 20:04:37.205572 | orchestrator | TASK [haproxy-config : Add configuration for nova when using single external frontend] *** 2025-06-02 20:04:37.205579 | orchestrator | Monday 02 June 2025 20:02:21 +0000 (0:00:04.477) 0:04:00.758 *********** 2025-06-02 20:04:37.205609 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 
'tls_backend': 'no'}}}})  2025-06-02 20:04:37.205625 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-06-02 20:04:37.205633 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-06-02 20:04:37.205642 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:04:37.205649 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-06-02 20:04:37.205658 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-06-02 20:04:37.205666 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-06-02 20:04:37.205679 | 
orchestrator | skipping: [testbed-node-1] 2025-06-02 20:04:37.205708 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-06-02 20:04:37.205717 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-06-02 20:04:37.205724 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-06-02 20:04:37.205731 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:04:37.205738 | orchestrator | 2025-06-02 20:04:37.205745 | orchestrator | TASK [haproxy-config : Configuring firewall for nova] ************************** 2025-06-02 20:04:37.205751 | orchestrator | Monday 02 June 2025 20:02:22 +0000 (0:00:00.937) 0:04:01.696 *********** 2025-06-02 20:04:37.205759 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-06-02 20:04:37.205766 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-06-02 20:04:37.205773 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-06-02 20:04:37.205780 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-06-02 
20:04:37.205792 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:04:37.205799 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-06-02 20:04:37.205806 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-06-02 20:04:37.205812 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-06-02 20:04:37.205837 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-06-02 20:04:37.205844 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:04:37.205850 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-06-02 20:04:37.205862 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-06-02 20:04:37.205869 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-06-02 20:04:37.205875 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata_external', 'value': 
{'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-06-02 20:04:37.205882 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:04:37.205889 | orchestrator | 2025-06-02 20:04:37.205895 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL users config] *************** 2025-06-02 20:04:37.205901 | orchestrator | Monday 02 June 2025 20:02:23 +0000 (0:00:00.960) 0:04:02.656 *********** 2025-06-02 20:04:37.205908 | orchestrator | changed: [testbed-node-0] 2025-06-02 20:04:37.205915 | orchestrator | changed: [testbed-node-1] 2025-06-02 20:04:37.205921 | orchestrator | changed: [testbed-node-2] 2025-06-02 20:04:37.205928 | orchestrator | 2025-06-02 20:04:37.205934 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL rules config] *************** 2025-06-02 20:04:37.205941 | orchestrator | Monday 02 June 2025 20:02:25 +0000 (0:00:01.627) 0:04:04.284 *********** 2025-06-02 20:04:37.205947 | orchestrator | changed: [testbed-node-0] 2025-06-02 20:04:37.205954 | orchestrator | changed: [testbed-node-1] 2025-06-02 20:04:37.205961 | orchestrator | changed: [testbed-node-2] 2025-06-02 20:04:37.205967 | orchestrator | 2025-06-02 20:04:37.205974 | orchestrator | TASK [include_role : nova-cell] ************************************************ 2025-06-02 20:04:37.205980 | orchestrator | Monday 02 June 2025 20:02:27 +0000 (0:00:02.142) 0:04:06.426 *********** 2025-06-02 20:04:37.205987 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 20:04:37.205994 | orchestrator | 2025-06-02 20:04:37.206000 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-novncproxy] ****************** 2025-06-02 20:04:37.206007 | orchestrator | Monday 02 June 2025 20:02:28 +0000 (0:00:01.573) 0:04:07.999 *********** 2025-06-02 20:04:37.206014 | orchestrator | included: 
/ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-novncproxy) 2025-06-02 20:04:37.206090 | orchestrator | 2025-06-02 20:04:37.206105 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config] *** 2025-06-02 20:04:37.206112 | orchestrator | Monday 02 June 2025 20:02:30 +0000 (0:00:01.141) 0:04:09.140 *********** 2025-06-02 20:04:37.206119 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2025-06-02 20:04:37.206128 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2025-06-02 20:04:37.206135 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 
'backend_http_extra': ['timeout tunnel 1h']}}}}) 2025-06-02 20:04:37.206143 | orchestrator | 2025-06-02 20:04:37.206150 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-novncproxy when using single external frontend] *** 2025-06-02 20:04:37.206184 | orchestrator | Monday 02 June 2025 20:02:34 +0000 (0:00:04.112) 0:04:13.252 *********** 2025-06-02 20:04:37.206197 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-06-02 20:04:37.206203 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:04:37.206210 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-06-02 20:04:37.206216 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:04:37.206223 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': 
True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-06-02 20:04:37.206230 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:04:37.206236 | orchestrator | 2025-06-02 20:04:37.206243 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-novncproxy] ***** 2025-06-02 20:04:37.206249 | orchestrator | Monday 02 June 2025 20:02:35 +0000 (0:00:01.491) 0:04:14.744 *********** 2025-06-02 20:04:37.206260 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-06-02 20:04:37.206267 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-06-02 20:04:37.206274 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:04:37.206280 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-06-02 20:04:37.206287 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-06-02 20:04:37.206293 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:04:37.206300 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout 
tunnel 1h']}})  2025-06-02 20:04:37.206306 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-06-02 20:04:37.206312 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:04:37.206318 | orchestrator | 2025-06-02 20:04:37.206325 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2025-06-02 20:04:37.206331 | orchestrator | Monday 02 June 2025 20:02:37 +0000 (0:00:01.873) 0:04:16.618 *********** 2025-06-02 20:04:37.206337 | orchestrator | changed: [testbed-node-0] 2025-06-02 20:04:37.206344 | orchestrator | changed: [testbed-node-1] 2025-06-02 20:04:37.206350 | orchestrator | changed: [testbed-node-2] 2025-06-02 20:04:37.206356 | orchestrator | 2025-06-02 20:04:37.206362 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2025-06-02 20:04:37.206369 | orchestrator | Monday 02 June 2025 20:02:39 +0000 (0:00:02.394) 0:04:19.013 *********** 2025-06-02 20:04:37.206375 | orchestrator | changed: [testbed-node-0] 2025-06-02 20:04:37.206381 | orchestrator | changed: [testbed-node-1] 2025-06-02 20:04:37.206387 | orchestrator | changed: [testbed-node-2] 2025-06-02 20:04:37.206393 | orchestrator | 2025-06-02 20:04:37.206400 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-spicehtml5proxy] ************* 2025-06-02 20:04:37.206433 | orchestrator | Monday 02 June 2025 20:02:42 +0000 (0:00:03.022) 0:04:22.035 *********** 2025-06-02 20:04:37.206441 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-spicehtml5proxy) 2025-06-02 20:04:37.206449 | orchestrator | 2025-06-02 20:04:37.206455 | orchestrator | TASK [haproxy-config : Copying over 
nova-cell:nova-spicehtml5proxy haproxy config] *** 2025-06-02 20:04:37.206462 | orchestrator | Monday 02 June 2025 20:02:43 +0000 (0:00:00.822) 0:04:22.858 *********** 2025-06-02 20:04:37.206473 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-06-02 20:04:37.206487 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:04:37.206494 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-06-02 20:04:37.206501 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:04:37.206508 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 
1h']}}}})  2025-06-02 20:04:37.206515 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:04:37.206523 | orchestrator | 2025-06-02 20:04:37.206530 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-spicehtml5proxy when using single external frontend] *** 2025-06-02 20:04:37.206537 | orchestrator | Monday 02 June 2025 20:02:45 +0000 (0:00:01.329) 0:04:24.187 *********** 2025-06-02 20:04:37.206544 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-06-02 20:04:37.206551 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:04:37.206559 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-06-02 20:04:37.206566 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:04:37.206573 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': 
['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-06-02 20:04:37.206580 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:04:37.206586 | orchestrator | 2025-06-02 20:04:37.206594 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-spicehtml5proxy] *** 2025-06-02 20:04:37.206621 | orchestrator | Monday 02 June 2025 20:02:46 +0000 (0:00:01.606) 0:04:25.794 *********** 2025-06-02 20:04:37.206629 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:04:37.206637 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:04:37.206644 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:04:37.206651 | orchestrator | 2025-06-02 20:04:37.206657 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2025-06-02 20:04:37.206670 | orchestrator | Monday 02 June 2025 20:02:47 +0000 (0:00:01.193) 0:04:26.987 *********** 2025-06-02 20:04:37.206677 | orchestrator | ok: [testbed-node-0] 2025-06-02 20:04:37.206684 | orchestrator | ok: [testbed-node-1] 2025-06-02 20:04:37.206691 | orchestrator | ok: [testbed-node-2] 2025-06-02 20:04:37.206698 | orchestrator | 2025-06-02 20:04:37.206708 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2025-06-02 20:04:37.206715 | orchestrator | Monday 02 June 2025 20:02:50 +0000 (0:00:02.446) 0:04:29.434 *********** 2025-06-02 20:04:37.206722 | orchestrator | ok: [testbed-node-1] 2025-06-02 20:04:37.206729 | orchestrator | ok: [testbed-node-0] 2025-06-02 20:04:37.206736 | orchestrator | ok: [testbed-node-2] 2025-06-02 20:04:37.206743 | orchestrator | 2025-06-02 20:04:37.206750 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-serialproxy] ***************** 2025-06-02 20:04:37.206757 | orchestrator | Monday 02 
June 2025 20:02:53 +0000 (0:00:02.754) 0:04:32.189 *********** 2025-06-02 20:04:37.206764 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-serialproxy) 2025-06-02 20:04:37.206771 | orchestrator | 2025-06-02 20:04:37.206778 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-serialproxy haproxy config] *** 2025-06-02 20:04:37.206785 | orchestrator | Monday 02 June 2025 20:02:53 +0000 (0:00:00.911) 0:04:33.100 *********** 2025-06-02 20:04:37.206793 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-06-02 20:04:37.206800 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:04:37.206807 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-06-02 20:04:37.206815 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:04:37.206822 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': 
{'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-06-02 20:04:37.206829 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:04:37.206835 | orchestrator | 2025-06-02 20:04:37.206842 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-serialproxy when using single external frontend] *** 2025-06-02 20:04:37.206848 | orchestrator | Monday 02 June 2025 20:02:54 +0000 (0:00:00.983) 0:04:34.083 *********** 2025-06-02 20:04:37.206854 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-06-02 20:04:37.206865 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:04:37.206893 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-06-02 
20:04:37.206902 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:04:37.206912 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-06-02 20:04:37.206920 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:04:37.206927 | orchestrator | 2025-06-02 20:04:37.206934 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-serialproxy] **** 2025-06-02 20:04:37.206941 | orchestrator | Monday 02 June 2025 20:02:56 +0000 (0:00:01.197) 0:04:35.281 *********** 2025-06-02 20:04:37.206948 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:04:37.206955 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:04:37.206962 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:04:37.206969 | orchestrator | 2025-06-02 20:04:37.206977 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2025-06-02 20:04:37.206984 | orchestrator | Monday 02 June 2025 20:02:57 +0000 (0:00:01.599) 0:04:36.881 *********** 2025-06-02 20:04:37.206990 | orchestrator | ok: [testbed-node-0] 2025-06-02 20:04:37.206997 | orchestrator | ok: [testbed-node-1] 2025-06-02 20:04:37.207005 | orchestrator | ok: [testbed-node-2] 2025-06-02 20:04:37.207012 | orchestrator | 2025-06-02 20:04:37.207039 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2025-06-02 20:04:37.207045 | orchestrator | Monday 02 June 2025 20:02:59 +0000 (0:00:02.129) 0:04:39.010 *********** 2025-06-02 20:04:37.207052 | 
orchestrator | ok: [testbed-node-0] 2025-06-02 20:04:37.207057 | orchestrator | ok: [testbed-node-1] 2025-06-02 20:04:37.207063 | orchestrator | ok: [testbed-node-2] 2025-06-02 20:04:37.207070 | orchestrator | 2025-06-02 20:04:37.207077 | orchestrator | TASK [include_role : octavia] ************************************************** 2025-06-02 20:04:37.207084 | orchestrator | Monday 02 June 2025 20:03:02 +0000 (0:00:02.855) 0:04:41.866 *********** 2025-06-02 20:04:37.207091 | orchestrator | included: octavia for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 20:04:37.207098 | orchestrator | 2025-06-02 20:04:37.207105 | orchestrator | TASK [haproxy-config : Copying over octavia haproxy config] ******************** 2025-06-02 20:04:37.207112 | orchestrator | Monday 02 June 2025 20:03:03 +0000 (0:00:01.232) 0:04:43.099 *********** 2025-06-02 20:04:37.207120 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-06-02 20:04:37.207134 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 
'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-06-02 20:04:37.207142 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-06-02 20:04:37.207177 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-06-02 20:04:37.207185 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250530', 
'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-06-02 20:04:37.207191 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-06-02 20:04:37.207197 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-06-02 20:04:37.207209 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-06-02 20:04:37.207234 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-06-02 20:04:37.207246 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-06-02 20:04:37.207254 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-06-02 20:04:37.207262 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-06-02 20:04:37.207269 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-06-02 20:04:37.207281 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-06-02 20:04:37.207289 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-06-02 20:04:37.207296 | orchestrator | 2025-06-02 20:04:37.207303 | orchestrator | TASK [haproxy-config : Add configuration for octavia when using single external frontend] *** 2025-06-02 20:04:37.207311 | orchestrator | Monday 02 June 2025 20:03:07 +0000 (0:00:03.522) 0:04:46.621 *********** 2025-06-02 20:04:37.207343 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250530', 'volumes': 
['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-06-02 20:04:37.207353 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-06-02 20:04:37.207360 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-06-02 20:04:37.207368 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-06-02 20:04:37.207380 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-06-02 20:04:37.207387 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:04:37.207411 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': 
{'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-06-02 20:04:37.207422 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-06-02 20:04:37.207429 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-06-02 20:04:37.207435 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-06-02 20:04:37.207442 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-06-02 20:04:37.207453 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:04:37.207460 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-06-02 20:04:37.207468 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-06-02 20:04:37.207494 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-06-02 20:04:37.207505 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-06-02 20:04:37.207512 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-06-02 20:04:37.207524 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:04:37.207531 | orchestrator | 2025-06-02 20:04:37.207537 | orchestrator | TASK [haproxy-config : Configuring firewall for octavia] *********************** 2025-06-02 20:04:37.207545 | orchestrator | Monday 02 June 2025 20:03:08 +0000 (0:00:00.676) 0:04:47.297 *********** 2025-06-02 20:04:37.207552 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-06-02 20:04:37.207559 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-06-02 20:04:37.207567 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:04:37.207573 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-06-02 20:04:37.207579 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-06-02 20:04:37.207586 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:04:37.207593 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-06-02 20:04:37.207600 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-06-02 20:04:37.207607 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:04:37.207613 | orchestrator | 2025-06-02 20:04:37.207620 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL users config] ************ 2025-06-02 20:04:37.207626 | orchestrator | Monday 02 June 2025 20:03:08 +0000 (0:00:00.765) 0:04:48.062 *********** 2025-06-02 20:04:37.207633 | orchestrator | changed: [testbed-node-0] 2025-06-02 20:04:37.207639 | orchestrator | changed: [testbed-node-1] 2025-06-02 20:04:37.207646 | orchestrator | changed: [testbed-node-2] 2025-06-02 20:04:37.207653 | orchestrator | 2025-06-02 20:04:37.207659 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL rules config] ************ 2025-06-02 20:04:37.207666 | orchestrator | Monday 02 June 2025 20:03:10 +0000 (0:00:01.541) 0:04:49.604 *********** 2025-06-02 20:04:37.207672 | orchestrator | changed: [testbed-node-1] 2025-06-02 20:04:37.207679 | orchestrator | changed: [testbed-node-0] 2025-06-02 20:04:37.207686 | orchestrator | changed: [testbed-node-2] 2025-06-02 20:04:37.207692 | orchestrator | 2025-06-02 20:04:37.207699 | orchestrator | TASK [include_role : opensearch] *********************************************** 2025-06-02 20:04:37.207706 | orchestrator | Monday 02 June 2025 20:03:12 +0000 (0:00:01.834) 0:04:51.438 *********** 2025-06-02 20:04:37.207734 | orchestrator | included: opensearch for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 20:04:37.207742 | orchestrator | 2025-06-02 20:04:37.207749 
| orchestrator | TASK [haproxy-config : Copying over opensearch haproxy config] ***************** 2025-06-02 20:04:37.207755 | orchestrator | Monday 02 June 2025 20:03:13 +0000 (0:00:01.257) 0:04:52.695 *********** 2025-06-02 20:04:37.207767 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250530', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-06-02 20:04:37.207780 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250530', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-06-02 20:04:37.207787 
| orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250530', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-06-02 20:04:37.207794 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250530', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-06-02 20:04:37.207823 | 
orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250530', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-06-02 20:04:37.207835 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250530', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': 
{'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-06-02 20:04:37.207842 | orchestrator | 2025-06-02 20:04:37.207848 | orchestrator | TASK [haproxy-config : Add configuration for opensearch when using single external frontend] *** 2025-06-02 20:04:37.207854 | orchestrator | Monday 02 June 2025 20:03:18 +0000 (0:00:05.058) 0:04:57.754 *********** 2025-06-02 20:04:37.207860 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250530', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-06-02 20:04:37.207866 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250530', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-06-02 20:04:37.207872 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:04:37.207901 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250530', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-06-02 20:04:37.207913 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250530', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-06-02 20:04:37.207919 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:04:37.207926 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250530', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-06-02 20:04:37.207933 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250530', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-06-02 20:04:37.207940 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:04:37.207947 | orchestrator | 2025-06-02 20:04:37.207954 | orchestrator | TASK [haproxy-config : Configuring firewall for opensearch] ******************** 2025-06-02 20:04:37.207961 | orchestrator | Monday 02 June 2025 20:03:19 +0000 (0:00:00.976) 0:04:58.731 *********** 2025-06-02 20:04:37.207986 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2025-06-02 20:04:37.207999 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-06-02 20:04:37.208010 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-06-02 20:04:37.208035 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:04:37.208043 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2025-06-02 20:04:37.208051 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-06-02 20:04:37.208059 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-06-02 20:04:37.208067 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:04:37.208074 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2025-06-02 20:04:37.208082 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-06-02 20:04:37.208089 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-06-02 20:04:37.208096 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:04:37.208103 | orchestrator | 2025-06-02 20:04:37.208111 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL users config] ********* 2025-06-02 20:04:37.208118 | orchestrator | Monday 02 June 2025 20:03:20 +0000 (0:00:00.772) 0:04:59.504 *********** 2025-06-02 20:04:37.208125 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:04:37.208132 | orchestrator | 
skipping: [testbed-node-1]
2025-06-02 20:04:37.208139 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:04:37.208146 | orchestrator |
2025-06-02 20:04:37.208153 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL rules config] *********
2025-06-02 20:04:37.208160 | orchestrator | Monday 02 June 2025 20:03:20 +0000 (0:00:00.381) 0:04:59.885 ***********
2025-06-02 20:04:37.208166 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:04:37.208173 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:04:37.208179 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:04:37.208186 | orchestrator |
2025-06-02 20:04:37.208193 | orchestrator | TASK [include_role : prometheus] ***********************************************
2025-06-02 20:04:37.208200 | orchestrator | Monday 02 June 2025 20:03:21 +0000 (0:00:01.142) 0:05:01.028 ***********
2025-06-02 20:04:37.208207 | orchestrator | included: prometheus for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-02 20:04:37.208214 | orchestrator |
2025-06-02 20:04:37.208221 | orchestrator | TASK [haproxy-config : Copying over prometheus haproxy config] *****************
2025-06-02 20:04:37.208228 | orchestrator | Monday 02 June 2025 20:03:23 +0000 (0:00:01.505) 0:05:02.534 ***********
2025-06-02 20:04:37.208237 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20250530', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http',
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-06-02 20:04:37.208276 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-06-02 20:04:37.208294 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250530', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 20:04:37.208301 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250530', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 20:04:37.208308 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20250530', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-06-02 20:04:37.208314 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-06-02 20:04:37.208321 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-06-02 20:04:37.208333 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 
'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250530', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 20:04:37.208360 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250530', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 20:04:37.208371 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-06-02 20:04:37.208378 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20250530', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-06-02 20:04:37.208385 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-06-02 20:04:37.208392 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250530', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 20:04:37.208399 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250530', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 20:04:37.208410 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-06-02 20:04:37.208441 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20250530', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-06-02 20:04:37.208451 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 
'registry.osism.tech/kolla/release/prometheus-openstack-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-06-02 20:04:37.208459 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20250530', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-06-02 20:04:37.208466 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250530', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 20:04:37.208478 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-06-02 20:04:37.208489 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20250530', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 20:04:37.208499 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 
'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250530', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 20:04:37.208507 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-06-02 20:04:37.208514 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20250530', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 20:04:37.208521 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-06-02 20:04:37.208528 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20250530', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2025-06-02 20:04:37.208546 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})
2025-06-02 20:04:37.208556 | orchestrator | skipping:
[testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250530', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 20:04:37.208562 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20250530', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 20:04:37.208569 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-06-02 20:04:37.208576 | orchestrator |
2025-06-02 20:04:37.208584 | orchestrator | TASK [haproxy-config : Add configuration for prometheus when using single external frontend] ***
2025-06-02 20:04:37.208591 | orchestrator | Monday 02 June 2025 20:03:27 +0000 (0:00:04.030) 0:05:06.564 ***********
2025-06-02 20:04:37.208597 | orchestrator | skipping: [testbed-node-0] => (item={'key':
'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20250530', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-06-02 20:04:37.208611 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-06-02 20:04:37.208619 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250530', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 20:04:37.208630 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 
'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250530', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 20:04:37.208641 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-06-02 20:04:37.208650 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20250530', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 
'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-06-02 20:04:37.208657 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-06-02 20:04:37.208672 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250530', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 20:04:37.208679 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20250530', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 20:04:37.208686 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-06-02 20:04:37.208697 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:04:37.208708 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20250530', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-06-02 20:04:37.208716 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-06-02 20:04:37.208723 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250530', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 20:04:37.208730 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250530', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 20:04:37.208742 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-06-02 20:04:37.208750 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 
'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20250530', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-06-02 20:04:37.208766 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-06-02 20:04:37.208774 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 
'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250530', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 20:04:37.208781 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20250530', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 20:04:37.208788 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20250530', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-06-02 20:04:37.208804 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 
'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-06-02 20:04:37.208811 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:04:37.208818 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-06-02 20:04:37.208829 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250530', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 20:04:37.208839 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250530', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 20:04:37.208845 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-06-02 20:04:37.208852 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20250530', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-06-02 20:04:37.208864 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': 
{'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-06-02 20:04:37.208872 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250530', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 20:04:37.208880 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20250530', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 20:04:37.208890 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-06-02 20:04:37.208898 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:04:37.208905 | orchestrator | 2025-06-02 20:04:37.208911 | orchestrator | TASK [haproxy-config : Configuring firewall for prometheus] ******************** 2025-06-02 20:04:37.208918 | orchestrator | Monday 02 June 2025 20:03:28 +0000 (0:00:01.467) 0:05:08.031 *********** 2025-06-02 20:04:37.208925 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2025-06-02 20:04:37.208932 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2025-06-02 20:04:37.208940 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-06-02 20:04:37.208952 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-06-02 20:04:37.208959 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:04:37.208966 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2025-06-02 20:04:37.208972 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2025-06-02 20:04:37.208999 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-06-02 20:04:37.209007 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-06-02 20:04:37.209013 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:04:37.209039 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2025-06-02 20:04:37.209046 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2025-06-02 20:04:37.209053 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-06-02 20:04:37.209060 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-06-02 20:04:37.209066 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:04:37.209073 | orchestrator | 2025-06-02 20:04:37.209079 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL users config] ********* 2025-06-02 20:04:37.209085 | orchestrator | Monday 02 June 2025 20:03:29 +0000 (0:00:01.007) 0:05:09.039 *********** 2025-06-02 20:04:37.209095 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:04:37.209102 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:04:37.209112 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:04:37.209119 | orchestrator | 2025-06-02 20:04:37.209126 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL rules config] ********* 2025-06-02 20:04:37.209133 | orchestrator | Monday 02 June 2025 20:03:30 +0000 (0:00:00.406) 0:05:09.446 *********** 2025-06-02 20:04:37.209139 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:04:37.209146 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:04:37.209152 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:04:37.209158 | orchestrator | 2025-06-02 20:04:37.209165 | orchestrator | TASK [include_role : rabbitmq] ************************************************* 2025-06-02 20:04:37.209175 | orchestrator | Monday 02 June 2025 20:03:32 +0000 (0:00:01.674) 0:05:11.121 *********** 2025-06-02 20:04:37.209187 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 20:04:37.209194 | orchestrator | 2025-06-02 20:04:37.209201 | orchestrator | TASK [haproxy-config : Copying over rabbitmq haproxy config] ******************* 2025-06-02 
20:04:37.209208 | orchestrator | Monday 02 June 2025 20:03:33 +0000 (0:00:01.722) 0:05:12.843 *********** 2025-06-02 20:04:37.209215 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250530', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-06-02 20:04:37.209224 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250530', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-06-02 20:04:37.209232 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250530', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-06-02 20:04:37.209239 | orchestrator | 2025-06-02 20:04:37.209246 | orchestrator | TASK [haproxy-config : Add configuration for rabbitmq when using single external frontend] *** 2025-06-02 20:04:37.209253 | orchestrator | Monday 02 June 2025 20:03:36 +0000 (0:00:02.628) 0:05:15.471 *********** 2025-06-02 20:04:37.209268 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250530', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 
'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2025-06-02 20:04:37.209281 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250530', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2025-06-02 20:04:37.209288 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:04:37.209294 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:04:37.209301 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250530', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 
'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2025-06-02 20:04:37.209308 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:04:37.209314 | orchestrator | 2025-06-02 20:04:37.209320 | orchestrator | TASK [haproxy-config : Configuring firewall for rabbitmq] ********************** 2025-06-02 20:04:37.209328 | orchestrator | Monday 02 June 2025 20:03:36 +0000 (0:00:00.382) 0:05:15.854 *********** 2025-06-02 20:04:37.209335 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2025-06-02 20:04:37.209342 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:04:37.209349 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2025-06-02 20:04:37.209356 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:04:37.209363 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2025-06-02 20:04:37.209370 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:04:37.209377 | orchestrator | 2025-06-02 20:04:37.209384 | orchestrator | TASK 
[proxysql-config : Copying over rabbitmq ProxySQL users config] *********** 2025-06-02 20:04:37.209396 | orchestrator | Monday 02 June 2025 20:03:37 +0000 (0:00:01.048) 0:05:16.903 *********** 2025-06-02 20:04:37.209403 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:04:37.209410 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:04:37.209417 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:04:37.209424 | orchestrator | 2025-06-02 20:04:37.209431 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL rules config] *********** 2025-06-02 20:04:37.209442 | orchestrator | Monday 02 June 2025 20:03:38 +0000 (0:00:00.450) 0:05:17.353 *********** 2025-06-02 20:04:37.209449 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:04:37.209456 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:04:37.209463 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:04:37.209470 | orchestrator | 2025-06-02 20:04:37.209477 | orchestrator | TASK [include_role : skyline] ************************************************** 2025-06-02 20:04:37.209484 | orchestrator | Monday 02 June 2025 20:03:39 +0000 (0:00:01.338) 0:05:18.692 *********** 2025-06-02 20:04:37.209492 | orchestrator | included: skyline for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 20:04:37.209499 | orchestrator | 2025-06-02 20:04:37.209510 | orchestrator | TASK [haproxy-config : Copying over skyline haproxy config] ******************** 2025-06-02 20:04:37.209517 | orchestrator | Monday 02 June 2025 20:03:41 +0000 (0:00:01.728) 0:05:20.421 *********** 2025-06-02 20:04:37.209525 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20250530', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2025-06-02 20:04:37.209532 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20250530', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2025-06-02 20:04:37.209540 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20250530', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2025-06-02 20:04:37.209556 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20250530', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2025-06-02 20:04:37.209568 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20250530', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2025-06-02 20:04:37.209576 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20250530', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2025-06-02 20:04:37.209584 | orchestrator | 2025-06-02 20:04:37.209591 | orchestrator | TASK [haproxy-config : Add configuration for skyline when using single external frontend] *** 2025-06-02 20:04:37.209598 | orchestrator | Monday 02 June 2025 20:03:47 +0000 (0:00:05.931) 0:05:26.352 *********** 2025-06-02 20:04:37.209605 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20250530', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2025-06-02 20:04:37.209621 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20250530', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2025-06-02 20:04:37.209628 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:04:37.209639 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20250530', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2025-06-02 20:04:37.209646 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20250530', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2025-06-02 20:04:37.209653 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:04:37.209661 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 
'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20250530', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2025-06-02 20:04:37.209676 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20250530', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2025-06-02 20:04:37.209684 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:04:37.209691 | orchestrator | 2025-06-02 20:04:37.209698 | orchestrator | TASK [haproxy-config : Configuring firewall for skyline] 
*********************** 2025-06-02 20:04:37.209707 | orchestrator | Monday 02 June 2025 20:03:47 +0000 (0:00:00.611) 0:05:26.964 *********** 2025-06-02 20:04:37.209713 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-06-02 20:04:37.209720 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-06-02 20:04:37.209730 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-06-02 20:04:37.209738 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-06-02 20:04:37.209745 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:04:37.209752 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-06-02 20:04:37.209759 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-06-02 20:04:37.209767 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}) 
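The skipped container definitions above carry kolla-style healthcheck specs (`healthcheck_curl http://<host>:<port>/docs` with `interval`, `retries`, and `timeout`). A minimal sketch of such an HTTP healthcheck, assuming a simple retry loop; this is illustrative, not the actual `healthcheck_curl` implementation shipped in the kolla images:

```python
# Illustrative HTTP healthcheck in the spirit of the container specs above.
# The function name and retry loop are assumptions, not kolla's real script.
import time
import urllib.request
import urllib.error


def http_healthcheck(url: str, retries: int = 3, timeout: float = 30.0,
                     interval: float = 30.0) -> bool:
    """Return True once the endpoint answers with an HTTP status < 400."""
    for attempt in range(retries):
        try:
            with urllib.request.urlopen(url, timeout=timeout) as resp:
                if resp.status < 400:
                    return True
        except (urllib.error.URLError, OSError):
            pass  # connection refused, DNS failure, timeout, ...
        if attempt < retries - 1:
            time.sleep(interval)  # back off before the next probe
    return False
```

The `interval`/`retries`/`timeout` parameters mirror the fields visible in the healthcheck dicts logged above.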
 2025-06-02 20:04:37.209774 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-06-02 20:04:37.209781 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:04:37.209789 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-06-02 20:04:37.209796 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-06-02 20:04:37.209808 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-06-02 20:04:37.209815 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-06-02 20:04:37.209822 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:04:37.209830 | orchestrator | 2025-06-02 20:04:37.209836 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL users config] ************ 2025-06-02 20:04:37.209842 | orchestrator | Monday 02 June 2025 20:03:49 +0000 (0:00:01.580) 0:05:28.544 *********** 2025-06-02 20:04:37.209847 | orchestrator | changed: [testbed-node-0] 2025-06-02 20:04:37.209853 | orchestrator | changed: [testbed-node-1] 2025-06-02 20:04:37.209859 | orchestrator | changed: [testbed-node-2] 2025-06-02 20:04:37.209867 | orchestrator 
| 2025-06-02 20:04:37.209874 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL rules config] ************ 2025-06-02 20:04:37.209881 | orchestrator | Monday 02 June 2025 20:03:50 +0000 (0:00:01.339) 0:05:29.884 *********** 2025-06-02 20:04:37.209889 | orchestrator | changed: [testbed-node-0] 2025-06-02 20:04:37.209896 | orchestrator | changed: [testbed-node-1] 2025-06-02 20:04:37.209903 | orchestrator | changed: [testbed-node-2] 2025-06-02 20:04:37.209910 | orchestrator | 2025-06-02 20:04:37.209917 | orchestrator | TASK [include_role : swift] **************************************************** 2025-06-02 20:04:37.209923 | orchestrator | Monday 02 June 2025 20:03:52 +0000 (0:00:02.114) 0:05:31.999 *********** 2025-06-02 20:04:37.209929 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:04:37.209937 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:04:37.209944 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:04:37.209951 | orchestrator | 2025-06-02 20:04:37.209958 | orchestrator | TASK [include_role : tacker] *************************************************** 2025-06-02 20:04:37.209965 | orchestrator | Monday 02 June 2025 20:03:53 +0000 (0:00:00.329) 0:05:32.328 *********** 2025-06-02 20:04:37.209973 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:04:37.209980 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:04:37.209987 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:04:37.209994 | orchestrator | 2025-06-02 20:04:37.210001 | orchestrator | TASK [include_role : trove] **************************************************** 2025-06-02 20:04:37.210012 | orchestrator | Monday 02 June 2025 20:03:53 +0000 (0:00:00.282) 0:05:32.611 *********** 2025-06-02 20:04:37.210096 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:04:37.210104 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:04:37.210112 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:04:37.210119 | orchestrator | 
2025-06-02 20:04:37.210127 | orchestrator | TASK [include_role : venus] **************************************************** 2025-06-02 20:04:37.210134 | orchestrator | Monday 02 June 2025 20:03:54 +0000 (0:00:00.644) 0:05:33.255 *********** 2025-06-02 20:04:37.210142 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:04:37.210149 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:04:37.210157 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:04:37.210165 | orchestrator | 2025-06-02 20:04:37.210172 | orchestrator | TASK [include_role : watcher] ************************************************** 2025-06-02 20:04:37.210184 | orchestrator | Monday 02 June 2025 20:03:54 +0000 (0:00:00.314) 0:05:33.570 *********** 2025-06-02 20:04:37.210192 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:04:37.210200 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:04:37.210207 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:04:37.210215 | orchestrator | 2025-06-02 20:04:37.210222 | orchestrator | TASK [include_role : zun] ****************************************************** 2025-06-02 20:04:37.210230 | orchestrator | Monday 02 June 2025 20:03:54 +0000 (0:00:00.308) 0:05:33.878 *********** 2025-06-02 20:04:37.210237 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:04:37.210245 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:04:37.210259 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:04:37.210267 | orchestrator | 2025-06-02 20:04:37.210275 | orchestrator | RUNNING HANDLER [loadbalancer : Check IP addresses on the API interface] ******* 2025-06-02 20:04:37.210283 | orchestrator | Monday 02 June 2025 20:03:55 +0000 (0:00:00.849) 0:05:34.727 *********** 2025-06-02 20:04:37.210290 | orchestrator | ok: [testbed-node-0] 2025-06-02 20:04:37.210299 | orchestrator | ok: [testbed-node-1] 2025-06-02 20:04:37.210307 | orchestrator | ok: [testbed-node-2] 2025-06-02 20:04:37.210315 | orchestrator | 2025-06-02 
20:04:37.210323 | orchestrator | RUNNING HANDLER [loadbalancer : Group HA nodes by status] ********************** 2025-06-02 20:04:37.210330 | orchestrator | Monday 02 June 2025 20:03:56 +0000 (0:00:00.710) 0:05:35.437 *********** 2025-06-02 20:04:37.210338 | orchestrator | ok: [testbed-node-0] 2025-06-02 20:04:37.210345 | orchestrator | ok: [testbed-node-1] 2025-06-02 20:04:37.210353 | orchestrator | ok: [testbed-node-2] 2025-06-02 20:04:37.210361 | orchestrator | 2025-06-02 20:04:37.210369 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup keepalived container] ************** 2025-06-02 20:04:37.210377 | orchestrator | Monday 02 June 2025 20:03:56 +0000 (0:00:00.350) 0:05:35.788 *********** 2025-06-02 20:04:37.210384 | orchestrator | ok: [testbed-node-0] 2025-06-02 20:04:37.210392 | orchestrator | ok: [testbed-node-1] 2025-06-02 20:04:37.210400 | orchestrator | ok: [testbed-node-2] 2025-06-02 20:04:37.210408 | orchestrator | 2025-06-02 20:04:37.210416 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup haproxy container] ***************** 2025-06-02 20:04:37.210423 | orchestrator | Monday 02 June 2025 20:03:57 +0000 (0:00:00.797) 0:05:36.585 *********** 2025-06-02 20:04:37.210431 | orchestrator | ok: [testbed-node-0] 2025-06-02 20:04:37.210439 | orchestrator | ok: [testbed-node-1] 2025-06-02 20:04:37.210447 | orchestrator | ok: [testbed-node-2] 2025-06-02 20:04:37.210454 | orchestrator | 2025-06-02 20:04:37.210463 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup proxysql container] **************** 2025-06-02 20:04:37.210470 | orchestrator | Monday 02 June 2025 20:03:58 +0000 (0:00:01.293) 0:05:37.879 *********** 2025-06-02 20:04:37.210478 | orchestrator | ok: [testbed-node-0] 2025-06-02 20:04:37.210486 | orchestrator | ok: [testbed-node-1] 2025-06-02 20:04:37.210494 | orchestrator | ok: [testbed-node-2] 2025-06-02 20:04:37.210502 | orchestrator | 2025-06-02 20:04:37.210509 | orchestrator | RUNNING HANDLER [loadbalancer : Start 
backup haproxy container] **************** 2025-06-02 20:04:37.210518 | orchestrator | Monday 02 June 2025 20:03:59 +0000 (0:00:00.919) 0:05:38.798 *********** 2025-06-02 20:04:37.210525 | orchestrator | changed: [testbed-node-1] 2025-06-02 20:04:37.210533 | orchestrator | changed: [testbed-node-2] 2025-06-02 20:04:37.210542 | orchestrator | changed: [testbed-node-0] 2025-06-02 20:04:37.210550 | orchestrator | 2025-06-02 20:04:37.210557 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup haproxy to start] ************** 2025-06-02 20:04:37.210565 | orchestrator | Monday 02 June 2025 20:04:07 +0000 (0:00:08.261) 0:05:47.059 *********** 2025-06-02 20:04:37.210573 | orchestrator | ok: [testbed-node-1] 2025-06-02 20:04:37.210581 | orchestrator | ok: [testbed-node-0] 2025-06-02 20:04:37.210588 | orchestrator | ok: [testbed-node-2] 2025-06-02 20:04:37.210595 | orchestrator | 2025-06-02 20:04:37.210602 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup proxysql container] *************** 2025-06-02 20:04:37.210609 | orchestrator | Monday 02 June 2025 20:04:08 +0000 (0:00:00.799) 0:05:47.859 *********** 2025-06-02 20:04:37.210617 | orchestrator | changed: [testbed-node-0] 2025-06-02 20:04:37.210624 | orchestrator | changed: [testbed-node-1] 2025-06-02 20:04:37.210631 | orchestrator | changed: [testbed-node-2] 2025-06-02 20:04:37.210638 | orchestrator | 2025-06-02 20:04:37.210646 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup proxysql to start] ************* 2025-06-02 20:04:37.210653 | orchestrator | Monday 02 June 2025 20:04:17 +0000 (0:00:08.434) 0:05:56.293 *********** 2025-06-02 20:04:37.210661 | orchestrator | ok: [testbed-node-0] 2025-06-02 20:04:37.210668 | orchestrator | ok: [testbed-node-1] 2025-06-02 20:04:37.210676 | orchestrator | ok: [testbed-node-2] 2025-06-02 20:04:37.210683 | orchestrator | 2025-06-02 20:04:37.210691 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup keepalived container] ************* 
2025-06-02 20:04:37.210704 | orchestrator | Monday 02 June 2025 20:04:20 +0000 (0:00:03.708) 0:06:00.001 *********** 2025-06-02 20:04:37.210711 | orchestrator | changed: [testbed-node-2] 2025-06-02 20:04:37.210719 | orchestrator | changed: [testbed-node-1] 2025-06-02 20:04:37.210727 | orchestrator | changed: [testbed-node-0] 2025-06-02 20:04:37.210734 | orchestrator | 2025-06-02 20:04:37.210742 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master haproxy container] ***************** 2025-06-02 20:04:37.210750 | orchestrator | Monday 02 June 2025 20:04:28 +0000 (0:00:08.103) 0:06:08.105 *********** 2025-06-02 20:04:37.210758 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:04:37.210765 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:04:37.210772 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:04:37.210779 | orchestrator | 2025-06-02 20:04:37.210787 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master proxysql container] **************** 2025-06-02 20:04:37.210794 | orchestrator | Monday 02 June 2025 20:04:29 +0000 (0:00:00.338) 0:06:08.444 *********** 2025-06-02 20:04:37.210801 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:04:37.210816 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:04:37.210824 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:04:37.210831 | orchestrator | 2025-06-02 20:04:37.210837 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master keepalived container] ************** 2025-06-02 20:04:37.210844 | orchestrator | Monday 02 June 2025 20:04:30 +0000 (0:00:00.734) 0:06:09.178 *********** 2025-06-02 20:04:37.210850 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:04:37.210857 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:04:37.210864 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:04:37.210871 | orchestrator | 2025-06-02 20:04:37.210879 | orchestrator | RUNNING HANDLER [loadbalancer : Start master haproxy container] **************** 
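The loadbalancer handlers above serially stop and restart the backup haproxy/proxysql/keepalived containers and then wait for each to come up ("Wait for backup haproxy to start", "Wait for haproxy to listen on VIP"). A minimal sketch of that kind of readiness wait, assuming a plain TCP connect probe; host and port values are placeholders, not taken from this deployment:

```python
# Illustrative "wait until the service listens" probe, similar in intent to
# the wait handlers logged above. Not the actual kolla-ansible task.
import socket
import time


def wait_for_port(host: str, port: int, timeout: float = 60.0) -> bool:
    """Poll a TCP port until it accepts connections or the deadline passes."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with socket.create_connection((host, port), timeout=2):
                return True  # something is listening
        except OSError:
            time.sleep(1)  # not listening yet; retry until the deadline
    return False
```

Restarting the backups first and only then (conditionally) the master, as the handler ordering above shows, keeps the VIP served throughout the rollout.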
2025-06-02 20:04:37.210886 | orchestrator | Monday 02 June 2025 20:04:30 +0000 (0:00:00.347) 0:06:09.525 *********** 2025-06-02 20:04:37.210899 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:04:37.210906 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:04:37.210913 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:04:37.210920 | orchestrator | 2025-06-02 20:04:37.210928 | orchestrator | RUNNING HANDLER [loadbalancer : Start master proxysql container] *************** 2025-06-02 20:04:37.210935 | orchestrator | Monday 02 June 2025 20:04:30 +0000 (0:00:00.337) 0:06:09.862 *********** 2025-06-02 20:04:37.210942 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:04:37.210950 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:04:37.210957 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:04:37.210965 | orchestrator | 2025-06-02 20:04:37.210972 | orchestrator | RUNNING HANDLER [loadbalancer : Start master keepalived container] ************* 2025-06-02 20:04:37.210979 | orchestrator | Monday 02 June 2025 20:04:31 +0000 (0:00:00.352) 0:06:10.215 *********** 2025-06-02 20:04:37.210986 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:04:37.210994 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:04:37.211001 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:04:37.211008 | orchestrator | 2025-06-02 20:04:37.211037 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for haproxy to listen on VIP] ************* 2025-06-02 20:04:37.211044 | orchestrator | Monday 02 June 2025 20:04:31 +0000 (0:00:00.686) 0:06:10.902 *********** 2025-06-02 20:04:37.211050 | orchestrator | ok: [testbed-node-0] 2025-06-02 20:04:37.211057 | orchestrator | ok: [testbed-node-1] 2025-06-02 20:04:37.211063 | orchestrator | ok: [testbed-node-2] 2025-06-02 20:04:37.211070 | orchestrator | 2025-06-02 20:04:37.211077 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for proxysql to listen on VIP] ************ 2025-06-02 
20:04:37.211084 | orchestrator | Monday 02 June 2025 20:04:32 +0000 (0:00:00.878) 0:06:11.781 *********** 2025-06-02 20:04:37.211091 | orchestrator | ok: [testbed-node-0] 2025-06-02 20:04:37.211097 | orchestrator | ok: [testbed-node-1] 2025-06-02 20:04:37.211104 | orchestrator | ok: [testbed-node-2] 2025-06-02 20:04:37.211111 | orchestrator | 2025-06-02 20:04:37.211117 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-02 20:04:37.211132 | orchestrator | testbed-node-0 : ok=123  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0 2025-06-02 20:04:37.211140 | orchestrator | testbed-node-1 : ok=122  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0 2025-06-02 20:04:37.211146 | orchestrator | testbed-node-2 : ok=122  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0 2025-06-02 20:04:37.211153 | orchestrator | 2025-06-02 20:04:37.211160 | orchestrator | 2025-06-02 20:04:37.211166 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-02 20:04:37.211173 | orchestrator | Monday 02 June 2025 20:04:33 +0000 (0:00:00.826) 0:06:12.607 *********** 2025-06-02 20:04:37.211180 | orchestrator | =============================================================================== 2025-06-02 20:04:37.211187 | orchestrator | loadbalancer : Start backup proxysql container -------------------------- 8.43s 2025-06-02 20:04:37.211193 | orchestrator | loadbalancer : Start backup haproxy container --------------------------- 8.26s 2025-06-02 20:04:37.211200 | orchestrator | loadbalancer : Start backup keepalived container ------------------------ 8.10s 2025-06-02 20:04:37.211207 | orchestrator | haproxy-config : Copying over skyline haproxy config -------------------- 5.93s 2025-06-02 20:04:37.211213 | orchestrator | haproxy-config : Copying over barbican haproxy config ------------------- 5.69s 2025-06-02 20:04:37.211220 | orchestrator | 
haproxy-config : Configuring firewall for glance ------------------------ 5.17s 2025-06-02 20:04:37.211227 | orchestrator | haproxy-config : Copying over opensearch haproxy config ----------------- 5.06s 2025-06-02 20:04:37.211233 | orchestrator | haproxy-config : Copying over aodh haproxy config ----------------------- 4.79s 2025-06-02 20:04:37.211240 | orchestrator | haproxy-config : Copying over horizon haproxy config -------------------- 4.75s 2025-06-02 20:04:37.211247 | orchestrator | haproxy-config : Copying over glance haproxy config --------------------- 4.61s 2025-06-02 20:04:37.211253 | orchestrator | loadbalancer : Copying over config.json files for services -------------- 4.50s 2025-06-02 20:04:37.211260 | orchestrator | haproxy-config : Copying over nova haproxy config ----------------------- 4.48s 2025-06-02 20:04:37.211267 | orchestrator | haproxy-config : Copying over neutron haproxy config -------------------- 4.19s 2025-06-02 20:04:37.211273 | orchestrator | haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config --- 4.11s 2025-06-02 20:04:37.211280 | orchestrator | haproxy-config : Copying over prometheus haproxy config ----------------- 4.03s 2025-06-02 20:04:37.211287 | orchestrator | haproxy-config : Copying over designate haproxy config ------------------ 3.94s 2025-06-02 20:04:37.211293 | orchestrator | haproxy-config : Copying over manila haproxy config --------------------- 3.93s 2025-06-02 20:04:37.211300 | orchestrator | haproxy-config : Copying over cinder haproxy config --------------------- 3.90s 2025-06-02 20:04:37.211307 | orchestrator | loadbalancer : Wait for backup proxysql to start ------------------------ 3.71s 2025-06-02 20:04:37.211313 | orchestrator | haproxy-config : Copying over grafana haproxy config -------------------- 3.62s 2025-06-02 20:04:37.211326 | orchestrator | 2025-06-02 20:04:37 | INFO  | Task 3281642f-6e92-41fb-b68d-cd78064f91af is in state STARTED 2025-06-02 20:04:37.211334 | orchestrator | 
2025-06-02 20:04:37 | INFO  | Task 1e7de8ff-5f59-43e3-b68b-0dfec31d47b4 is in state STARTED 2025-06-02 20:04:37.211341 | orchestrator | 2025-06-02 20:04:37 | INFO  | Wait 1 second(s) until the next check 2025-06-02 20:04:40.255252 | orchestrator | 2025-06-02 20:04:40 | INFO  | Task 4efb0510-077d-4334-9816-4b4ce82f3dee is in state STARTED 2025-06-02 20:04:40.255357 | orchestrator | 2025-06-02 20:04:40 | INFO  | Task 3281642f-6e92-41fb-b68d-cd78064f91af is in state STARTED 2025-06-02 20:04:40.256512 | orchestrator | 2025-06-02 20:04:40 | INFO  | Task 1e7de8ff-5f59-43e3-b68b-0dfec31d47b4 is in state STARTED 2025-06-02 20:04:40.257189 | orchestrator | 2025-06-02 20:04:40 | INFO  | Wait 1 second(s) until the next check 2025-06-02 20:04:43.305279 | orchestrator | 2025-06-02 20:04:43 | INFO  | Task 4efb0510-077d-4334-9816-4b4ce82f3dee is in state STARTED 2025-06-02 20:04:43.305834 | orchestrator | 2025-06-02 20:04:43 | INFO  | Task 3281642f-6e92-41fb-b68d-cd78064f91af is in state STARTED 2025-06-02 20:04:43.307662 | orchestrator | 2025-06-02 20:04:43 | INFO  | Task 1e7de8ff-5f59-43e3-b68b-0dfec31d47b4 is in state STARTED 2025-06-02 20:04:43.307728 | orchestrator | 2025-06-02 20:04:43 | INFO  | Wait 1 second(s) until the next check 2025-06-02 20:04:46.345621 | orchestrator | 2025-06-02 20:04:46 | INFO  | Task 4efb0510-077d-4334-9816-4b4ce82f3dee is in state STARTED 2025-06-02 20:04:46.345909 | orchestrator | 2025-06-02 20:04:46 | INFO  | Task 3281642f-6e92-41fb-b68d-cd78064f91af is in state STARTED 2025-06-02 20:04:46.346647 | orchestrator | 2025-06-02 20:04:46 | INFO  | Task 1e7de8ff-5f59-43e3-b68b-0dfec31d47b4 is in state STARTED 2025-06-02 20:04:46.346914 | orchestrator | 2025-06-02 20:04:46 | INFO  | Wait 1 second(s) until the next check 2025-06-02 20:04:49.375475 | orchestrator | 2025-06-02 20:04:49 | INFO  | Task 4efb0510-077d-4334-9816-4b4ce82f3dee is in state STARTED 2025-06-02 20:04:49.375796 | orchestrator | 2025-06-02 20:04:49 | INFO  | Task 
3281642f-6e92-41fb-b68d-cd78064f91af is in state STARTED 2025-06-02 20:04:49.376219 | orchestrator | 2025-06-02 20:04:49 | INFO  | Task 1e7de8ff-5f59-43e3-b68b-0dfec31d47b4 is in state STARTED 2025-06-02 20:04:49.376255 | orchestrator | 2025-06-02 20:04:49 | INFO  | Wait 1 second(s) until the next check 2025-06-02 20:04:52.413066 | orchestrator | 2025-06-02 20:04:52 | INFO  | Task 4efb0510-077d-4334-9816-4b4ce82f3dee is in state STARTED 2025-06-02 20:04:52.413757 | orchestrator | 2025-06-02 20:04:52 | INFO  | Task 3281642f-6e92-41fb-b68d-cd78064f91af is in state STARTED 2025-06-02 20:04:52.414209 | orchestrator | 2025-06-02 20:04:52 | INFO  | Task 1e7de8ff-5f59-43e3-b68b-0dfec31d47b4 is in state STARTED 2025-06-02 20:04:52.414245 | orchestrator | 2025-06-02 20:04:52 | INFO  | Wait 1 second(s) until the next check 2025-06-02 20:04:55.444249 | orchestrator | 2025-06-02 20:04:55 | INFO  | Task 4efb0510-077d-4334-9816-4b4ce82f3dee is in state STARTED 2025-06-02 20:04:55.444960 | orchestrator | 2025-06-02 20:04:55 | INFO  | Task 3281642f-6e92-41fb-b68d-cd78064f91af is in state STARTED 2025-06-02 20:04:55.448559 | orchestrator | 2025-06-02 20:04:55 | INFO  | Task 1e7de8ff-5f59-43e3-b68b-0dfec31d47b4 is in state STARTED 2025-06-02 20:04:55.448650 | orchestrator | 2025-06-02 20:04:55 | INFO  | Wait 1 second(s) until the next check 2025-06-02 20:04:58.491759 | orchestrator | 2025-06-02 20:04:58 | INFO  | Task 4efb0510-077d-4334-9816-4b4ce82f3dee is in state STARTED 2025-06-02 20:04:58.491834 | orchestrator | 2025-06-02 20:04:58 | INFO  | Task 3281642f-6e92-41fb-b68d-cd78064f91af is in state STARTED 2025-06-02 20:04:58.492428 | orchestrator | 2025-06-02 20:04:58 | INFO  | Task 1e7de8ff-5f59-43e3-b68b-0dfec31d47b4 is in state STARTED 2025-06-02 20:04:58.492440 | orchestrator | 2025-06-02 20:04:58 | INFO  | Wait 1 second(s) until the next check 2025-06-02 20:05:01.537210 | orchestrator | 2025-06-02 20:05:01 | INFO  | Task 4efb0510-077d-4334-9816-4b4ce82f3dee is in state 
STARTED 2025-06-02 20:05:01.542407 | orchestrator | 2025-06-02 20:05:01 | INFO  | Task 3281642f-6e92-41fb-b68d-cd78064f91af is in state STARTED 2025-06-02 20:05:01.545364 | orchestrator | 2025-06-02 20:05:01 | INFO  | Task 1e7de8ff-5f59-43e3-b68b-0dfec31d47b4 is in state STARTED 2025-06-02 20:05:01.546463 | orchestrator | 2025-06-02 20:05:01 | INFO  | Wait 1 second(s) until the next check 2025-06-02 20:05:04.595384 | orchestrator | 2025-06-02 20:05:04 | INFO  | Task 4efb0510-077d-4334-9816-4b4ce82f3dee is in state STARTED 2025-06-02 20:05:04.597374 | orchestrator | 2025-06-02 20:05:04 | INFO  | Task 3281642f-6e92-41fb-b68d-cd78064f91af is in state STARTED 2025-06-02 20:05:04.599262 | orchestrator | 2025-06-02 20:05:04 | INFO  | Task 1e7de8ff-5f59-43e3-b68b-0dfec31d47b4 is in state STARTED 2025-06-02 20:05:04.599325 | orchestrator | 2025-06-02 20:05:04 | INFO  | Wait 1 second(s) until the next check 2025-06-02 20:05:07.639669 | orchestrator | 2025-06-02 20:05:07 | INFO  | Task 4efb0510-077d-4334-9816-4b4ce82f3dee is in state STARTED 2025-06-02 20:05:07.641370 | orchestrator | 2025-06-02 20:05:07 | INFO  | Task 3281642f-6e92-41fb-b68d-cd78064f91af is in state STARTED 2025-06-02 20:05:07.643368 | orchestrator | 2025-06-02 20:05:07 | INFO  | Task 1e7de8ff-5f59-43e3-b68b-0dfec31d47b4 is in state STARTED 2025-06-02 20:05:07.644155 | orchestrator | 2025-06-02 20:05:07 | INFO  | Wait 1 second(s) until the next check 2025-06-02 20:05:10.709440 | orchestrator | 2025-06-02 20:05:10 | INFO  | Task 4efb0510-077d-4334-9816-4b4ce82f3dee is in state STARTED 2025-06-02 20:05:10.712528 | orchestrator | 2025-06-02 20:05:10 | INFO  | Task 3281642f-6e92-41fb-b68d-cd78064f91af is in state STARTED 2025-06-02 20:05:10.715534 | orchestrator | 2025-06-02 20:05:10 | INFO  | Task 1e7de8ff-5f59-43e3-b68b-0dfec31d47b4 is in state STARTED 2025-06-02 20:05:10.715837 | orchestrator | 2025-06-02 20:05:10 | INFO  | Wait 1 second(s) until the next check 2025-06-02 20:05:13.755249 | orchestrator | 
2025-06-02 20:05:13 | INFO  | Task 4efb0510-077d-4334-9816-4b4ce82f3dee is in state STARTED 2025-06-02 20:05:13.756387 | orchestrator | 2025-06-02 20:05:13 | INFO  | Task 3281642f-6e92-41fb-b68d-cd78064f91af is in state STARTED 2025-06-02 20:05:13.759072 | orchestrator | 2025-06-02 20:05:13 | INFO  | Task 1e7de8ff-5f59-43e3-b68b-0dfec31d47b4 is in state STARTED 2025-06-02 20:05:13.759113 | orchestrator | 2025-06-02 20:05:13 | INFO  | Wait 1 second(s) until the next check 2025-06-02 20:05:16.803590 | orchestrator | 2025-06-02 20:05:16 | INFO  | Task 4efb0510-077d-4334-9816-4b4ce82f3dee is in state STARTED 2025-06-02 20:05:16.805301 | orchestrator | 2025-06-02 20:05:16 | INFO  | Task 3281642f-6e92-41fb-b68d-cd78064f91af is in state STARTED 2025-06-02 20:05:16.807700 | orchestrator | 2025-06-02 20:05:16 | INFO  | Task 1e7de8ff-5f59-43e3-b68b-0dfec31d47b4 is in state STARTED 2025-06-02 20:05:16.807744 | orchestrator | 2025-06-02 20:05:16 | INFO  | Wait 1 second(s) until the next check 2025-06-02 20:05:19.854260 | orchestrator | 2025-06-02 20:05:19 | INFO  | Task 4efb0510-077d-4334-9816-4b4ce82f3dee is in state STARTED 2025-06-02 20:05:19.857025 | orchestrator | 2025-06-02 20:05:19 | INFO  | Task 3281642f-6e92-41fb-b68d-cd78064f91af is in state STARTED 2025-06-02 20:05:19.859294 | orchestrator | 2025-06-02 20:05:19 | INFO  | Task 1e7de8ff-5f59-43e3-b68b-0dfec31d47b4 is in state STARTED 2025-06-02 20:05:19.859327 | orchestrator | 2025-06-02 20:05:19 | INFO  | Wait 1 second(s) until the next check 2025-06-02 20:05:22.899768 | orchestrator | 2025-06-02 20:05:22 | INFO  | Task 4efb0510-077d-4334-9816-4b4ce82f3dee is in state STARTED 2025-06-02 20:05:22.901331 | orchestrator | 2025-06-02 20:05:22 | INFO  | Task 3281642f-6e92-41fb-b68d-cd78064f91af is in state STARTED 2025-06-02 20:05:22.902452 | orchestrator | 2025-06-02 20:05:22 | INFO  | Task 1e7de8ff-5f59-43e3-b68b-0dfec31d47b4 is in state STARTED 2025-06-02 20:05:22.902662 | orchestrator | 2025-06-02 20:05:22 | INFO  | 
Wait 1 second(s) until the next check
2025-06-02 20:05:25.954342 | orchestrator | 2025-06-02 20:05:25 | INFO  | Task 4efb0510-077d-4334-9816-4b4ce82f3dee is in state STARTED
2025-06-02 20:05:25.954473 | orchestrator | 2025-06-02 20:05:25 | INFO  | Task 3281642f-6e92-41fb-b68d-cd78064f91af is in state STARTED
2025-06-02 20:05:25.954489 | orchestrator | 2025-06-02 20:05:25 | INFO  | Task 1e7de8ff-5f59-43e3-b68b-0dfec31d47b4 is in state STARTED
2025-06-02 20:05:25.954501 | orchestrator | 2025-06-02 20:05:25 | INFO  | Wait 1 second(s) until the next check
2025-06-02 20:06:48.353376 | orchestrator | 2025-06-02 20:06:48 | INFO  | Task 4efb0510-077d-4334-9816-4b4ce82f3dee is in state STARTED
2025-06-02 20:06:48.354491 | orchestrator | 2025-06-02 20:06:48 | INFO  | Task 3281642f-6e92-41fb-b68d-cd78064f91af is in state STARTED
2025-06-02 20:06:48.355487 | orchestrator | 2025-06-02 20:06:48 | INFO  | Task 1e7de8ff-5f59-43e3-b68b-0dfec31d47b4 is in state STARTED
2025-06-02 20:06:48.355761 | orchestrator | 2025-06-02 20:06:48 | INFO  | Wait 1 second(s) until the next check
2025-06-02 20:06:51.422260 | orchestrator |
2025-06-02 20:06:51.422405 | orchestrator |
2025-06-02 20:06:51.422632 | orchestrator | PLAY [Prepare deployment of Ceph services] *************************************
2025-06-02 20:06:51.422662 | orchestrator |
2025-06-02 20:06:51.422679 | orchestrator | TASK [ceph-facts : Include facts.yml] ******************************************
2025-06-02 20:06:51.422696 | orchestrator | Monday 02 June 2025 19:55:50 +0000 (0:00:00.755) 0:00:00.755 ***********
2025-06-02 20:06:51.422718 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-06-02 20:06:51.422819 | orchestrator |
2025-06-02 20:06:51.422840 | orchestrator | TASK [ceph-facts : Check if it is atomic host] *********************************
2025-06-02 20:06:51.422856 | orchestrator | Monday 02 June 2025 19:55:51 +0000 (0:00:01.587) 0:00:01.928 ***********
2025-06-02 20:06:51.422874 | orchestrator | ok: [testbed-node-3]
2025-06-02 20:06:51.422921 | orchestrator | ok: [testbed-node-1]
2025-06-02 20:06:51.422937 | orchestrator | ok: [testbed-node-0]
2025-06-02 20:06:51.422954 | orchestrator | ok: [testbed-node-2]
2025-06-02 20:06:51.422965 | orchestrator | ok: [testbed-node-4]
2025-06-02 20:06:51.422975 | orchestrator | ok:
[testbed-node-5] 2025-06-02 20:06:51.422984 | orchestrator | 2025-06-02 20:06:51.422994 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2025-06-02 20:06:51.423005 | orchestrator | Monday 02 June 2025 19:55:53 +0000 (0:00:01.587) 0:00:03.516 *********** 2025-06-02 20:06:51.423021 | orchestrator | ok: [testbed-node-0] 2025-06-02 20:06:51.423045 | orchestrator | ok: [testbed-node-1] 2025-06-02 20:06:51.423062 | orchestrator | ok: [testbed-node-2] 2025-06-02 20:06:51.423094 | orchestrator | ok: [testbed-node-3] 2025-06-02 20:06:51.423112 | orchestrator | ok: [testbed-node-4] 2025-06-02 20:06:51.423127 | orchestrator | ok: [testbed-node-5] 2025-06-02 20:06:51.423143 | orchestrator | 2025-06-02 20:06:51.423159 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2025-06-02 20:06:51.423175 | orchestrator | Monday 02 June 2025 19:55:54 +0000 (0:00:00.823) 0:00:04.339 *********** 2025-06-02 20:06:51.423192 | orchestrator | ok: [testbed-node-0] 2025-06-02 20:06:51.423209 | orchestrator | ok: [testbed-node-1] 2025-06-02 20:06:51.423225 | orchestrator | ok: [testbed-node-2] 2025-06-02 20:06:51.423241 | orchestrator | ok: [testbed-node-3] 2025-06-02 20:06:51.423259 | orchestrator | ok: [testbed-node-4] 2025-06-02 20:06:51.423276 | orchestrator | ok: [testbed-node-5] 2025-06-02 20:06:51.423293 | orchestrator | 2025-06-02 20:06:51.423310 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2025-06-02 20:06:51.423326 | orchestrator | Monday 02 June 2025 19:55:55 +0000 (0:00:00.974) 0:00:05.314 *********** 2025-06-02 20:06:51.423345 | orchestrator | ok: [testbed-node-0] 2025-06-02 20:06:51.423589 | orchestrator | ok: [testbed-node-1] 2025-06-02 20:06:51.423603 | orchestrator | ok: [testbed-node-2] 2025-06-02 20:06:51.423615 | orchestrator | ok: [testbed-node-3] 2025-06-02 20:06:51.423685 | orchestrator | ok: [testbed-node-4] 2025-06-02 
20:06:51.423696 | orchestrator | ok: [testbed-node-5] 2025-06-02 20:06:51.423705 | orchestrator | 2025-06-02 20:06:51.423715 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2025-06-02 20:06:51.423725 | orchestrator | Monday 02 June 2025 19:55:55 +0000 (0:00:00.746) 0:00:06.061 *********** 2025-06-02 20:06:51.423734 | orchestrator | ok: [testbed-node-0] 2025-06-02 20:06:51.423744 | orchestrator | ok: [testbed-node-1] 2025-06-02 20:06:51.423753 | orchestrator | ok: [testbed-node-2] 2025-06-02 20:06:51.423763 | orchestrator | ok: [testbed-node-3] 2025-06-02 20:06:51.423772 | orchestrator | ok: [testbed-node-4] 2025-06-02 20:06:51.423781 | orchestrator | ok: [testbed-node-5] 2025-06-02 20:06:51.423791 | orchestrator | 2025-06-02 20:06:51.423800 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2025-06-02 20:06:51.423810 | orchestrator | Monday 02 June 2025 19:55:56 +0000 (0:00:00.558) 0:00:06.620 *********** 2025-06-02 20:06:51.423820 | orchestrator | ok: [testbed-node-0] 2025-06-02 20:06:51.423829 | orchestrator | ok: [testbed-node-1] 2025-06-02 20:06:51.423843 | orchestrator | ok: [testbed-node-2] 2025-06-02 20:06:51.423859 | orchestrator | ok: [testbed-node-3] 2025-06-02 20:06:51.423875 | orchestrator | ok: [testbed-node-4] 2025-06-02 20:06:51.423928 | orchestrator | ok: [testbed-node-5] 2025-06-02 20:06:51.423944 | orchestrator | 2025-06-02 20:06:51.423960 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2025-06-02 20:06:51.423977 | orchestrator | Monday 02 June 2025 19:55:57 +0000 (0:00:00.835) 0:00:07.455 *********** 2025-06-02 20:06:51.423993 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:06:51.424027 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:06:51.424084 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:06:51.424283 | orchestrator | skipping: [testbed-node-3] 2025-06-02 
20:06:51.424347 | orchestrator | skipping: [testbed-node-4] 2025-06-02 20:06:51.424357 | orchestrator | skipping: [testbed-node-5] 2025-06-02 20:06:51.424366 | orchestrator | 2025-06-02 20:06:51.424380 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2025-06-02 20:06:51.424494 | orchestrator | Monday 02 June 2025 19:55:57 +0000 (0:00:00.793) 0:00:08.248 *********** 2025-06-02 20:06:51.424513 | orchestrator | ok: [testbed-node-0] 2025-06-02 20:06:51.424530 | orchestrator | ok: [testbed-node-1] 2025-06-02 20:06:51.424546 | orchestrator | ok: [testbed-node-2] 2025-06-02 20:06:51.424563 | orchestrator | ok: [testbed-node-3] 2025-06-02 20:06:51.424579 | orchestrator | ok: [testbed-node-4] 2025-06-02 20:06:51.424595 | orchestrator | ok: [testbed-node-5] 2025-06-02 20:06:51.424749 | orchestrator | 2025-06-02 20:06:51.424770 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2025-06-02 20:06:51.424787 | orchestrator | Monday 02 June 2025 19:55:58 +0000 (0:00:00.864) 0:00:09.113 *********** 2025-06-02 20:06:51.424804 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-06-02 20:06:51.424822 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-06-02 20:06:51.424842 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-06-02 20:06:51.424860 | orchestrator | 2025-06-02 20:06:51.424994 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2025-06-02 20:06:51.425009 | orchestrator | Monday 02 June 2025 19:55:59 +0000 (0:00:00.662) 0:00:09.775 *********** 2025-06-02 20:06:51.425021 | orchestrator | ok: [testbed-node-0] 2025-06-02 20:06:51.425032 | orchestrator | ok: [testbed-node-1] 2025-06-02 20:06:51.425043 | orchestrator | ok: [testbed-node-2] 2025-06-02 20:06:51.425054 | orchestrator | ok: [testbed-node-3] 2025-06-02 
20:06:51.425065 | orchestrator | ok: [testbed-node-4] 2025-06-02 20:06:51.425077 | orchestrator | ok: [testbed-node-5] 2025-06-02 20:06:51.425088 | orchestrator | 2025-06-02 20:06:51.425131 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2025-06-02 20:06:51.425149 | orchestrator | Monday 02 June 2025 19:56:00 +0000 (0:00:01.314) 0:00:11.090 *********** 2025-06-02 20:06:51.425166 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-06-02 20:06:51.425326 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-06-02 20:06:51.425351 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-06-02 20:06:51.425368 | orchestrator | 2025-06-02 20:06:51.425384 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2025-06-02 20:06:51.425402 | orchestrator | Monday 02 June 2025 19:56:03 +0000 (0:00:02.960) 0:00:14.051 *********** 2025-06-02 20:06:51.425419 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-06-02 20:06:51.425436 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-06-02 20:06:51.425454 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-06-02 20:06:51.425471 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:06:51.425491 | orchestrator | 2025-06-02 20:06:51.425513 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2025-06-02 20:06:51.425532 | orchestrator | Monday 02 June 2025 19:56:04 +0000 (0:00:00.802) 0:00:14.853 *********** 2025-06-02 20:06:51.425566 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2025-06-02 20:06:51.425587 | orchestrator | 
skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2025-06-02 20:06:51.425621 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2025-06-02 20:06:51.425638 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:06:51.425655 | orchestrator | 2025-06-02 20:06:51.425671 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2025-06-02 20:06:51.425688 | orchestrator | Monday 02 June 2025 19:56:05 +0000 (0:00:01.113) 0:00:15.967 *********** 2025-06-02 20:06:51.425747 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-06-02 20:06:51.425771 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-06-02 20:06:51.425789 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 
'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-06-02 20:06:51.425910 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:06:51.425934 | orchestrator | 2025-06-02 20:06:51.426102 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2025-06-02 20:06:51.426129 | orchestrator | Monday 02 June 2025 19:56:06 +0000 (0:00:00.470) 0:00:16.438 *********** 2025-06-02 20:06:51.426142 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2025-06-02 19:56:01.410474', 'end': '2025-06-02 19:56:01.677744', 'delta': '0:00:00.267270', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2025-06-02 20:06:51.426175 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2025-06-02 19:56:02.457222', 'end': '2025-06-02 19:56:02.725281', 'delta': '0:00:00.268059', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 
'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2025-06-02 20:06:51.426198 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2025-06-02 19:56:03.286899', 'end': '2025-06-02 19:56:03.551013', 'delta': '0:00:00.264114', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2025-06-02 20:06:51.426222 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:06:51.426233 | orchestrator | 2025-06-02 20:06:51.426245 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2025-06-02 20:06:51.426257 | orchestrator | Monday 02 June 2025 19:56:06 +0000 (0:00:00.224) 0:00:16.663 *********** 2025-06-02 20:06:51.426268 | orchestrator | ok: [testbed-node-0] 2025-06-02 20:06:51.426279 | orchestrator | ok: [testbed-node-1] 2025-06-02 20:06:51.426290 | orchestrator | ok: [testbed-node-2] 2025-06-02 20:06:51.426301 | orchestrator | ok: [testbed-node-3] 2025-06-02 20:06:51.426312 | orchestrator | ok: [testbed-node-4] 2025-06-02 20:06:51.426323 | orchestrator | ok: [testbed-node-5] 2025-06-02 20:06:51.426334 | orchestrator | 2025-06-02 20:06:51.426346 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2025-06-02 20:06:51.426357 | orchestrator | Monday 02 June 2025 19:56:07 +0000 (0:00:01.323) 0:00:17.986 *********** 2025-06-02 20:06:51.426368 | orchestrator | ok: 
[testbed-node-0] 2025-06-02 20:06:51.426379 | orchestrator | 2025-06-02 20:06:51.426475 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2025-06-02 20:06:51.426489 | orchestrator | Monday 02 June 2025 19:56:08 +0000 (0:00:00.596) 0:00:18.583 *********** 2025-06-02 20:06:51.426500 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:06:51.426512 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:06:51.426523 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:06:51.426607 | orchestrator | skipping: [testbed-node-3] 2025-06-02 20:06:51.426625 | orchestrator | skipping: [testbed-node-4] 2025-06-02 20:06:51.426639 | orchestrator | skipping: [testbed-node-5] 2025-06-02 20:06:51.426656 | orchestrator | 2025-06-02 20:06:51.426763 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2025-06-02 20:06:51.426785 | orchestrator | Monday 02 June 2025 19:56:09 +0000 (0:00:01.180) 0:00:19.764 *********** 2025-06-02 20:06:51.426802 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:06:51.426819 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:06:51.426835 | orchestrator | skipping: [testbed-node-3] 2025-06-02 20:06:51.426853 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:06:51.426870 | orchestrator | skipping: [testbed-node-5] 2025-06-02 20:06:51.426919 | orchestrator | skipping: [testbed-node-4] 2025-06-02 20:06:51.426936 | orchestrator | 2025-06-02 20:06:51.426952 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2025-06-02 20:06:51.426969 | orchestrator | Monday 02 June 2025 19:56:11 +0000 (0:00:01.531) 0:00:21.296 *********** 2025-06-02 20:06:51.426979 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:06:51.426989 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:06:51.426998 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:06:51.427008 | orchestrator | skipping: 
[testbed-node-3] 2025-06-02 20:06:51.427017 | orchestrator | skipping: [testbed-node-4] 2025-06-02 20:06:51.427027 | orchestrator | skipping: [testbed-node-5] 2025-06-02 20:06:51.427036 | orchestrator | 2025-06-02 20:06:51.427046 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2025-06-02 20:06:51.427056 | orchestrator | Monday 02 June 2025 19:56:12 +0000 (0:00:01.072) 0:00:22.368 *********** 2025-06-02 20:06:51.427065 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:06:51.427075 | orchestrator | 2025-06-02 20:06:51.427084 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2025-06-02 20:06:51.427094 | orchestrator | Monday 02 June 2025 19:56:12 +0000 (0:00:00.212) 0:00:22.580 *********** 2025-06-02 20:06:51.427103 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:06:51.427125 | orchestrator | 2025-06-02 20:06:51.427134 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2025-06-02 20:06:51.427144 | orchestrator | Monday 02 June 2025 19:56:12 +0000 (0:00:00.309) 0:00:22.890 *********** 2025-06-02 20:06:51.427153 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:06:51.427163 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:06:51.427172 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:06:51.427184 | orchestrator | skipping: [testbed-node-3] 2025-06-02 20:06:51.427200 | orchestrator | skipping: [testbed-node-4] 2025-06-02 20:06:51.427217 | orchestrator | skipping: [testbed-node-5] 2025-06-02 20:06:51.427233 | orchestrator | 2025-06-02 20:06:51.427249 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2025-06-02 20:06:51.427296 | orchestrator | Monday 02 June 2025 19:56:13 +0000 (0:00:00.911) 0:00:23.801 *********** 2025-06-02 20:06:51.427313 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:06:51.427328 | orchestrator | skipping: 
[testbed-node-1]
2025-06-02 20:06:51.427345 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:06:51.427361 | orchestrator | skipping: [testbed-node-3]
2025-06-02 20:06:51.427377 | orchestrator | skipping: [testbed-node-4]
2025-06-02 20:06:51.427393 | orchestrator | skipping: [testbed-node-5]
2025-06-02 20:06:51.427408 | orchestrator |
2025-06-02 20:06:51.427426 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] **************
2025-06-02 20:06:51.427442 | orchestrator | Monday 02 June 2025 19:56:14 +0000 (0:00:01.055) 0:00:24.859 ***********
2025-06-02 20:06:51.427459 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:06:51.427475 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:06:51.427491 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:06:51.427505 | orchestrator | skipping: [testbed-node-3]
2025-06-02 20:06:51.427520 | orchestrator | skipping: [testbed-node-4]
2025-06-02 20:06:51.427534 | orchestrator | skipping: [testbed-node-5]
2025-06-02 20:06:51.427548 | orchestrator |
2025-06-02 20:06:51.427562 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] ***************************
2025-06-02 20:06:51.427576 | orchestrator | Monday 02 June 2025 19:56:15 +0000 (0:00:01.270) 0:00:26.129 ***********
2025-06-02 20:06:51.427590 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:06:51.427604 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:06:51.427620 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:06:51.427635 | orchestrator | skipping: [testbed-node-3]
2025-06-02 20:06:51.427650 | orchestrator | skipping: [testbed-node-4]
2025-06-02 20:06:51.427676 | orchestrator | skipping: [testbed-node-5]
2025-06-02 20:06:51.427692 | orchestrator |
2025-06-02 20:06:51.427708 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] ****
2025-06-02 20:06:51.427724 | orchestrator | Monday 02 June 2025 19:56:16 +0000 (0:00:00.887) 0:00:27.017 ***********
2025-06-02 20:06:51.427741 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:06:51.427757 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:06:51.427774 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:06:51.427790 | orchestrator | skipping: [testbed-node-3]
2025-06-02 20:06:51.427804 | orchestrator | skipping: [testbed-node-4]
2025-06-02 20:06:51.427814 | orchestrator | skipping: [testbed-node-5]
2025-06-02 20:06:51.427823 | orchestrator |
2025-06-02 20:06:51.427833 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] ***********************
2025-06-02 20:06:51.427842 | orchestrator | Monday 02 June 2025 19:56:17 +0000 (0:00:00.745) 0:00:27.762 ***********
2025-06-02 20:06:51.427852 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:06:51.427861 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:06:51.427871 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:06:51.427948 | orchestrator | skipping: [testbed-node-3]
2025-06-02 20:06:51.427961 | orchestrator | skipping: [testbed-node-4]
2025-06-02 20:06:51.427971 | orchestrator | skipping: [testbed-node-5]
2025-06-02 20:06:51.427981 | orchestrator |
2025-06-02 20:06:51.427990 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] ***
2025-06-02 20:06:51.428000 | orchestrator | Monday 02 June 2025 19:56:18 +0000 (0:00:00.752) 0:00:28.515 ***********
2025-06-02 20:06:51.428021 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:06:51.428031 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:06:51.428043 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:06:51.428059 | orchestrator | skipping: [testbed-node-3]
2025-06-02 20:06:51.428074 | orchestrator | skipping: [testbed-node-4]
2025-06-02 20:06:51.428090 | orchestrator | skipping: [testbed-node-5]
2025-06-02 20:06:51.428106 | orchestrator |
2025-06-02 20:06:51.428122 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************
2025-06-02 20:06:51.428137 | orchestrator | Monday 02 June 2025 19:56:18 +0000 (0:00:00.636) 0:00:29.152 ***********
2025-06-02 20:06:51.428154 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-06-02 20:06:51.428172 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-06-02 20:06:51.428187 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-06-02 20:06:51.428204 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-06-02 20:06:51.428237 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-06-02 20:06:51.428255 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-06-02 20:06:51.428281 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-06-02 20:06:51.428298 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-06-02 20:06:51.428332 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5f45be76-170b-43b0-9721-f75aad287b64', 'scsi-SQEMU_QEMU_HARDDISK_5f45be76-170b-43b0-9721-f75aad287b64'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5f45be76-170b-43b0-9721-f75aad287b64-part1', 'scsi-SQEMU_QEMU_HARDDISK_5f45be76-170b-43b0-9721-f75aad287b64-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5f45be76-170b-43b0-9721-f75aad287b64-part14', 'scsi-SQEMU_QEMU_HARDDISK_5f45be76-170b-43b0-9721-f75aad287b64-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5f45be76-170b-43b0-9721-f75aad287b64-part15', 'scsi-SQEMU_QEMU_HARDDISK_5f45be76-170b-43b0-9721-f75aad287b64-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5f45be76-170b-43b0-9721-f75aad287b64-part16', 'scsi-SQEMU_QEMU_HARDDISK_5f45be76-170b-43b0-9721-f75aad287b64-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2025-06-02 20:06:51.428368 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-06-02-19-17-43-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})
2025-06-02 20:06:51.428386 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-06-02 20:06:51.428411 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-06-02 20:06:51.428430 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-06-02 20:06:51.428459 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-06-02 20:06:51.428475 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-06-02 20:06:51.428489 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-06-02 20:06:51.428502 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-06-02 20:06:51.428516 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-06-02 20:06:51.428556 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_51e325de-b67d-49ea-ab97-c3f76b8e45c7', 'scsi-SQEMU_QEMU_HARDDISK_51e325de-b67d-49ea-ab97-c3f76b8e45c7'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_51e325de-b67d-49ea-ab97-c3f76b8e45c7-part1', 'scsi-SQEMU_QEMU_HARDDISK_51e325de-b67d-49ea-ab97-c3f76b8e45c7-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_51e325de-b67d-49ea-ab97-c3f76b8e45c7-part14', 'scsi-SQEMU_QEMU_HARDDISK_51e325de-b67d-49ea-ab97-c3f76b8e45c7-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_51e325de-b67d-49ea-ab97-c3f76b8e45c7-part15', 'scsi-SQEMU_QEMU_HARDDISK_51e325de-b67d-49ea-ab97-c3f76b8e45c7-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_51e325de-b67d-49ea-ab97-c3f76b8e45c7-part16', 'scsi-SQEMU_QEMU_HARDDISK_51e325de-b67d-49ea-ab97-c3f76b8e45c7-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2025-06-02 20:06:51.428583 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:06:51.428598 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-06-02-19-17-39-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})
2025-06-02 20:06:51.428612 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-06-02 20:06:51.428626 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-06-02 20:06:51.428635 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-06-02 20:06:51.428643 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-06-02 20:06:51.428658 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-06-02 20:06:51.428666 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-06-02 20:06:51.428679 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-06-02 20:06:51.428693 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:06:51.428701 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-06-02 20:06:51.428710 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_aba5dfb2-d59a-4774-ab63-5c2c16f9e35e', 'scsi-SQEMU_QEMU_HARDDISK_aba5dfb2-d59a-4774-ab63-5c2c16f9e35e'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_aba5dfb2-d59a-4774-ab63-5c2c16f9e35e-part1', 'scsi-SQEMU_QEMU_HARDDISK_aba5dfb2-d59a-4774-ab63-5c2c16f9e35e-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_aba5dfb2-d59a-4774-ab63-5c2c16f9e35e-part14', 'scsi-SQEMU_QEMU_HARDDISK_aba5dfb2-d59a-4774-ab63-5c2c16f9e35e-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_aba5dfb2-d59a-4774-ab63-5c2c16f9e35e-part15', 'scsi-SQEMU_QEMU_HARDDISK_aba5dfb2-d59a-4774-ab63-5c2c16f9e35e-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_aba5dfb2-d59a-4774-ab63-5c2c16f9e35e-part16', 'scsi-SQEMU_QEMU_HARDDISK_aba5dfb2-d59a-4774-ab63-5c2c16f9e35e-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2025-06-02 20:06:51.428724 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-06-02-19-17-45-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})
2025-06-02 20:06:51.428733 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--93e9f309--356a--50f8--bf6b--26db11b00033-osd--block--93e9f309--356a--50f8--bf6b--26db11b00033', 'dm-uuid-LVM-nq5ePTHYYeiXBqOEzKhSv5x7IpcUjKZPc0XaKOILv5EsZsvk4hPA7okc94KObNQM'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2025-06-02 20:06:51.428753 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--01a13ba8--1f69--5051--bec5--e01e7e9b87e5-osd--block--01a13ba8--1f69--5051--bec5--e01e7e9b87e5', 'dm-uuid-LVM-VA9A5JOIOF0zJoCyeskPzSbp7bqOuFcA3Z0dXMzoWiuWDZAX3i6zm9YhOku87Dd4'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2025-06-02 20:06:51.428762 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:06:51.428770 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-06-02 20:06:51.428778 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-06-02 20:06:51.428786 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-06-02 20:06:51.428794 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-06-02 20:06:51.428802 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--bdb59653--b88e--5628--a878--3ed7677d43f1-osd--block--bdb59653--b88e--5628--a878--3ed7677d43f1', 'dm-uuid-LVM-JnmMTlcXje3zZupdQTnGuJCtXtyKkwfTVXMNK88NfT48uRBqVys3sMrodSxbtxGo'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2025-06-02 20:06:51.428817 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-06-02 20:06:51.428826 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--ee20b18c--4531--5b6f--acaf--50beaceb257d-osd--block--ee20b18c--4531--5b6f--acaf--50beaceb257d', 'dm-uuid-LVM-pVYrEMzJRmJqf2kAIqHaSSxrfgvkeBNsYFqWIyK50ay2dEik5sDtbhIdmSMNgg5z'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2025-06-02 20:06:51.428844 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-06-02 20:06:51.428852 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-06-02 20:06:51.428860 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-06-02 20:06:51.428868 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-06-02 20:06:51.428876 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-06-02 20:06:51.428917 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-06-02 20:06:51.428930 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-06-02 20:06:51.428943 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-06-02 20:06:51.428965 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-06-02 20:06:51.428998 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3b3451ec-fae9-4227-a22d-4a5dda6aaaab', 'scsi-SQEMU_QEMU_HARDDISK_3b3451ec-fae9-4227-a22d-4a5dda6aaaab'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3b3451ec-fae9-4227-a22d-4a5dda6aaaab-part1', 'scsi-SQEMU_QEMU_HARDDISK_3b3451ec-fae9-4227-a22d-4a5dda6aaaab-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3b3451ec-fae9-4227-a22d-4a5dda6aaaab-part14', 'scsi-SQEMU_QEMU_HARDDISK_3b3451ec-fae9-4227-a22d-4a5dda6aaaab-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3b3451ec-fae9-4227-a22d-4a5dda6aaaab-part15', 'scsi-SQEMU_QEMU_HARDDISK_3b3451ec-fae9-4227-a22d-4a5dda6aaaab-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3b3451ec-fae9-4227-a22d-4a5dda6aaaab-part16', 'scsi-SQEMU_QEMU_HARDDISK_3b3451ec-fae9-4227-a22d-4a5dda6aaaab-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2025-06-02 20:06:51.429014 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--93e9f309--356a--50f8--bf6b--26db11b00033-osd--block--93e9f309--356a--50f8--bf6b--26db11b00033'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-fGb4mF-dZsm-xEfi-5vlv-eGmP-tK83-iaztIV', 'scsi-0QEMU_QEMU_HARDDISK_f90c13d8-18de-4224-a0ec-2fb9bc967aba', 'scsi-SQEMU_QEMU_HARDDISK_f90c13d8-18de-4224-a0ec-2fb9bc967aba'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2025-06-02 20:06:51.429028 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-06-02 20:06:51.429049 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--01a13ba8--1f69--5051--bec5--e01e7e9b87e5-osd--block--01a13ba8--1f69--5051--bec5--e01e7e9b87e5'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-I3N9TS-jEbF-egUA-3DLa-bL0J-Gloh-NrjqNb', 'scsi-0QEMU_QEMU_HARDDISK_31522631-626d-4eab-bbf4-d80ec429ee40', 'scsi-SQEMU_QEMU_HARDDISK_31522631-626d-4eab-bbf4-d80ec429ee40'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2025-06-02 20:06:51.429070 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-06-02 20:06:51.429091 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2edf9efd-121b-4ff6-b6f5-d420782ba04f', 'scsi-SQEMU_QEMU_HARDDISK_2edf9efd-121b-4ff6-b6f5-d420782ba04f'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2025-06-02 20:06:51.429105 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-06-02-19-17-40-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})
2025-06-02 20:06:51.429120 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc.
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_453458da-4d99-4de0-a2fa-ec8f657b9d69', 'scsi-SQEMU_QEMU_HARDDISK_453458da-4d99-4de0-a2fa-ec8f657b9d69'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_453458da-4d99-4de0-a2fa-ec8f657b9d69-part1', 'scsi-SQEMU_QEMU_HARDDISK_453458da-4d99-4de0-a2fa-ec8f657b9d69-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_453458da-4d99-4de0-a2fa-ec8f657b9d69-part14', 'scsi-SQEMU_QEMU_HARDDISK_453458da-4d99-4de0-a2fa-ec8f657b9d69-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_453458da-4d99-4de0-a2fa-ec8f657b9d69-part15', 'scsi-SQEMU_QEMU_HARDDISK_453458da-4d99-4de0-a2fa-ec8f657b9d69-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_453458da-4d99-4de0-a2fa-ec8f657b9d69-part16', 'scsi-SQEMU_QEMU_HARDDISK_453458da-4d99-4de0-a2fa-ec8f657b9d69-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-02 20:06:51.429149 | 
orchestrator | skipping: [testbed-node-3] 2025-06-02 20:06:51.429164 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--bdb59653--b88e--5628--a878--3ed7677d43f1-osd--block--bdb59653--b88e--5628--a878--3ed7677d43f1'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-MScA2o-wrwR-cxTI-HSN1-CJaA-ZmWO-duVTze', 'scsi-0QEMU_QEMU_HARDDISK_56067267-e29e-4b33-bc58-6a568e4c77ee', 'scsi-SQEMU_QEMU_HARDDISK_56067267-e29e-4b33-bc58-6a568e4c77ee'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-02 20:06:51.429184 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--ee20b18c--4531--5b6f--acaf--50beaceb257d-osd--block--ee20b18c--4531--5b6f--acaf--50beaceb257d'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-GWz1kk-3I3I-MJR6-xAen-2SHi-BVgj-e8DG44', 'scsi-0QEMU_QEMU_HARDDISK_afb213e9-57a6-474d-a5f5-62ab693fc54b', 'scsi-SQEMU_QEMU_HARDDISK_afb213e9-57a6-474d-a5f5-62ab693fc54b'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-02 20:06:51.429198 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b537626e-57d0-4db8-bc93-475b5479d5db', 'scsi-SQEMU_QEMU_HARDDISK_b537626e-57d0-4db8-bc93-475b5479d5db'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-02 20:06:51.429211 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-06-02-19-17-35-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-02 20:06:51.429223 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--86208513--8fbd--535b--80fd--915c228be133-osd--block--86208513--8fbd--535b--80fd--915c228be133', 'dm-uuid-LVM-AXsfCRSUZ922JqSVuA1OB0lhGcw2SnPS8zh8EFbuCqvp1KxISDIjRi8k1SRymk26'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-06-02 20:06:51.429236 | orchestrator | skipping: [testbed-node-4] 2025-06-02 20:06:51.429249 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': 
['dm-name-ceph--ed769c7c--5756--52eb--9583--a607cefce370-osd--block--ed769c7c--5756--52eb--9583--a607cefce370', 'dm-uuid-LVM-YzJgxiVmVv1MohBuCR2yiPf0zwqUhauMGgazaSDH5MQUhPgBeb5aSgAB2yMhXtX5'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-06-02 20:06:51.429276 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 20:06:51.429290 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 20:06:51.429314 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 20:06:51.429327 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 
'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 20:06:51.429340 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 20:06:51.429353 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 20:06:51.429366 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 20:06:51.429380 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': 
None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 20:06:51.429410 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_08b58ced-3c5b-405c-ae09-2d18558cfc25', 'scsi-SQEMU_QEMU_HARDDISK_08b58ced-3c5b-405c-ae09-2d18558cfc25'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_08b58ced-3c5b-405c-ae09-2d18558cfc25-part1', 'scsi-SQEMU_QEMU_HARDDISK_08b58ced-3c5b-405c-ae09-2d18558cfc25-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_08b58ced-3c5b-405c-ae09-2d18558cfc25-part14', 'scsi-SQEMU_QEMU_HARDDISK_08b58ced-3c5b-405c-ae09-2d18558cfc25-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_08b58ced-3c5b-405c-ae09-2d18558cfc25-part15', 'scsi-SQEMU_QEMU_HARDDISK_08b58ced-3c5b-405c-ae09-2d18558cfc25-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_08b58ced-3c5b-405c-ae09-2d18558cfc25-part16', 'scsi-SQEMU_QEMU_HARDDISK_08b58ced-3c5b-405c-ae09-2d18558cfc25-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 
'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-02 20:06:51.429435 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--86208513--8fbd--535b--80fd--915c228be133-osd--block--86208513--8fbd--535b--80fd--915c228be133'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-PwbXIf-nbiY-VZEp-Jwyt-8O2F-GGtW-x9I8wZ', 'scsi-0QEMU_QEMU_HARDDISK_fe4e5841-a5c6-4e3c-a2f3-c1ddedcfef3b', 'scsi-SQEMU_QEMU_HARDDISK_fe4e5841-a5c6-4e3c-a2f3-c1ddedcfef3b'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-02 20:06:51.429450 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--ed769c7c--5756--52eb--9583--a607cefce370-osd--block--ed769c7c--5756--52eb--9583--a607cefce370'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-i42UGR-1M23-JeND-WGwO-3Hx7-Q2xw-qnnSe5', 'scsi-0QEMU_QEMU_HARDDISK_17194968-3402-4871-a3b7-d8b4dd3032d8', 'scsi-SQEMU_QEMU_HARDDISK_17194968-3402-4871-a3b7-d8b4dd3032d8'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-02 20:06:51.429464 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3a83bf91-153f-49f3-b384-9ce8856c05fb', 'scsi-SQEMU_QEMU_HARDDISK_3a83bf91-153f-49f3-b384-9ce8856c05fb'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-02 20:06:51.429486 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-06-02-19-17-37-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-02 20:06:51.429507 | orchestrator | skipping: [testbed-node-5] 2025-06-02 20:06:51.429521 | orchestrator | 2025-06-02 20:06:51.429535 | orchestrator | TASK [ceph-facts : Set_fact devices 
generate device list when osd_auto_discovery] *** 2025-06-02 20:06:51.429544 | orchestrator | Monday 02 June 2025 19:56:20 +0000 (0:00:01.902) 0:00:31.054 *********** 2025-06-02 20:06:51.429552 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 20:06:51.429566 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 20:06:51.429575 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 
'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 20:06:51.429583 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 20:06:51.429592 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 20:06:51.429605 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 
'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 20:06:51.429622 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 20:06:51.429634 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 20:06:51.429642 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 
'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 20:06:51.429651 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 20:06:51.429659 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 20:06:51.429667 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2025-06-02 20:06:51.429694 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5f45be76-170b-43b0-9721-f75aad287b64', 'scsi-SQEMU_QEMU_HARDDISK_5f45be76-170b-43b0-9721-f75aad287b64'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5f45be76-170b-43b0-9721-f75aad287b64-part1', 'scsi-SQEMU_QEMU_HARDDISK_5f45be76-170b-43b0-9721-f75aad287b64-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5f45be76-170b-43b0-9721-f75aad287b64-part14', 'scsi-SQEMU_QEMU_HARDDISK_5f45be76-170b-43b0-9721-f75aad287b64-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5f45be76-170b-43b0-9721-f75aad287b64-part15', 'scsi-SQEMU_QEMU_HARDDISK_5f45be76-170b-43b0-9721-f75aad287b64-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5f45be76-170b-43b0-9721-f75aad287b64-part16', 'scsi-SQEMU_QEMU_HARDDISK_5f45be76-170b-43b0-9721-f75aad287b64-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 
'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 20:06:51.429705 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 20:06:51.429713 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-06-02-19-17-43-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 20:06:51.429729 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in 
groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 20:06:51.429743 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 20:06:51.429752 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 20:06:51.429765 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': 
{'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_51e325de-b67d-49ea-ab97-c3f76b8e45c7', 'scsi-SQEMU_QEMU_HARDDISK_51e325de-b67d-49ea-ab97-c3f76b8e45c7'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_51e325de-b67d-49ea-ab97-c3f76b8e45c7-part1', 'scsi-SQEMU_QEMU_HARDDISK_51e325de-b67d-49ea-ab97-c3f76b8e45c7-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_51e325de-b67d-49ea-ab97-c3f76b8e45c7-part14', 'scsi-SQEMU_QEMU_HARDDISK_51e325de-b67d-49ea-ab97-c3f76b8e45c7-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_51e325de-b67d-49ea-ab97-c3f76b8e45c7-part15', 'scsi-SQEMU_QEMU_HARDDISK_51e325de-b67d-49ea-ab97-c3f76b8e45c7-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_51e325de-b67d-49ea-ab97-c3f76b8e45c7-part16', 'scsi-SQEMU_QEMU_HARDDISK_51e325de-b67d-49ea-ab97-c3f76b8e45c7-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 
'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 20:06:51.429780 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-06-02-19-17-39-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 20:06:51.429789 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:06:51.430164 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 20:06:51.430197 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 
'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 20:06:51.430207 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 20:06:51.430215 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 20:06:51.430223 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 
'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 20:06:51.430243 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 20:06:51.430293 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 20:06:51.430341 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 20:06:51.430356 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_aba5dfb2-d59a-4774-ab63-5c2c16f9e35e', 'scsi-SQEMU_QEMU_HARDDISK_aba5dfb2-d59a-4774-ab63-5c2c16f9e35e'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_aba5dfb2-d59a-4774-ab63-5c2c16f9e35e-part1', 'scsi-SQEMU_QEMU_HARDDISK_aba5dfb2-d59a-4774-ab63-5c2c16f9e35e-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_aba5dfb2-d59a-4774-ab63-5c2c16f9e35e-part14', 'scsi-SQEMU_QEMU_HARDDISK_aba5dfb2-d59a-4774-ab63-5c2c16f9e35e-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_aba5dfb2-d59a-4774-ab63-5c2c16f9e35e-part15', 'scsi-SQEMU_QEMU_HARDDISK_aba5dfb2-d59a-4774-ab63-5c2c16f9e35e-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_aba5dfb2-d59a-4774-ab63-5c2c16f9e35e-part16', 
'scsi-SQEMU_QEMU_HARDDISK_aba5dfb2-d59a-4774-ab63-5c2c16f9e35e-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 20:06:51.430376 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-06-02-19-17-45-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 20:06:51.430385 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:06:51.430452 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--93e9f309--356a--50f8--bf6b--26db11b00033-osd--block--93e9f309--356a--50f8--bf6b--26db11b00033', 'dm-uuid-LVM-nq5ePTHYYeiXBqOEzKhSv5x7IpcUjKZPc0XaKOILv5EsZsvk4hPA7okc94KObNQM'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 
'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 20:06:51.430477 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--01a13ba8--1f69--5051--bec5--e01e7e9b87e5-osd--block--01a13ba8--1f69--5051--bec5--e01e7e9b87e5', 'dm-uuid-LVM-VA9A5JOIOF0zJoCyeskPzSbp7bqOuFcA3Z0dXMzoWiuWDZAX3i6zm9YhOku87Dd4'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 20:06:51.430491 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 20:06:51.430504 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': 
{'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 20:06:51.430527 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 20:06:51.430541 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 20:06:51.430643 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': 
[], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 20:06:51.430710 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 20:06:51.430732 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 20:06:51.430746 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 
'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 20:06:51.430841 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3b3451ec-fae9-4227-a22d-4a5dda6aaaab', 'scsi-SQEMU_QEMU_HARDDISK_3b3451ec-fae9-4227-a22d-4a5dda6aaaab'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3b3451ec-fae9-4227-a22d-4a5dda6aaaab-part1', 'scsi-SQEMU_QEMU_HARDDISK_3b3451ec-fae9-4227-a22d-4a5dda6aaaab-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3b3451ec-fae9-4227-a22d-4a5dda6aaaab-part14', 'scsi-SQEMU_QEMU_HARDDISK_3b3451ec-fae9-4227-a22d-4a5dda6aaaab-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3b3451ec-fae9-4227-a22d-4a5dda6aaaab-part15', 'scsi-SQEMU_QEMU_HARDDISK_3b3451ec-fae9-4227-a22d-4a5dda6aaaab-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3b3451ec-fae9-4227-a22d-4a5dda6aaaab-part16', 
'scsi-SQEMU_QEMU_HARDDISK_3b3451ec-fae9-4227-a22d-4a5dda6aaaab-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 20:06:51.430870 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--93e9f309--356a--50f8--bf6b--26db11b00033-osd--block--93e9f309--356a--50f8--bf6b--26db11b00033'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-fGb4mF-dZsm-xEfi-5vlv-eGmP-tK83-iaztIV', 'scsi-0QEMU_QEMU_HARDDISK_f90c13d8-18de-4224-a0ec-2fb9bc967aba', 'scsi-SQEMU_QEMU_HARDDISK_f90c13d8-18de-4224-a0ec-2fb9bc967aba'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 20:06:51.430952 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--01a13ba8--1f69--5051--bec5--e01e7e9b87e5-osd--block--01a13ba8--1f69--5051--bec5--e01e7e9b87e5'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-I3N9TS-jEbF-egUA-3DLa-bL0J-Gloh-NrjqNb', 'scsi-0QEMU_QEMU_HARDDISK_31522631-626d-4eab-bbf4-d80ec429ee40', 'scsi-SQEMU_QEMU_HARDDISK_31522631-626d-4eab-bbf4-d80ec429ee40'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 20:06:51.430971 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2edf9efd-121b-4ff6-b6f5-d420782ba04f', 'scsi-SQEMU_QEMU_HARDDISK_2edf9efd-121b-4ff6-b6f5-d420782ba04f'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 20:06:51.430980 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-06-02-19-17-40-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 20:06:51.430989 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:06:51.430998 | orchestrator | skipping: [testbed-node-3] 2025-06-02 20:06:51.431069 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--bdb59653--b88e--5628--a878--3ed7677d43f1-osd--block--bdb59653--b88e--5628--a878--3ed7677d43f1', 'dm-uuid-LVM-JnmMTlcXje3zZupdQTnGuJCtXtyKkwfTVXMNK88NfT48uRBqVys3sMrodSxbtxGo'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 20:06:51.431087 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--86208513--8fbd--535b--80fd--915c228be133-osd--block--86208513--8fbd--535b--80fd--915c228be133', 'dm-uuid-LVM-AXsfCRSUZ922JqSVuA1OB0lhGcw2SnPS8zh8EFbuCqvp1KxISDIjRi8k1SRymk26'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 
'ansible_loop_var': 'item'})  2025-06-02 20:06:51.431095 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--ee20b18c--4531--5b6f--acaf--50beaceb257d-osd--block--ee20b18c--4531--5b6f--acaf--50beaceb257d', 'dm-uuid-LVM-pVYrEMzJRmJqf2kAIqHaSSxrfgvkeBNsYFqWIyK50ay2dEik5sDtbhIdmSMNgg5z'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 20:06:51.431109 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--ed769c7c--5756--52eb--9583--a607cefce370-osd--block--ed769c7c--5756--52eb--9583--a607cefce370', 'dm-uuid-LVM-YzJgxiVmVv1MohBuCR2yiPf0zwqUhauMGgazaSDH5MQUhPgBeb5aSgAB2yMhXtX5'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 20:06:51.431118 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': 
{'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 20:06:51.431174 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 20:06:51.431185 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 20:06:51.431198 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 
'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 20:06:51.431206 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 20:06:51.431221 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 20:06:51.431229 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 20:06:51.431244 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 20:06:51.431327 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 20:06:51.431372 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 
'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-06-02 20:06:51.431392 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-06-02 20:06:51.431405 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-06-02 20:06:51.431428 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-06-02 20:06:51.431441 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-06-02 20:06:51.431455 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-06-02 20:06:51.431562 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_08b58ced-3c5b-405c-ae09-2d18558cfc25', 'scsi-SQEMU_QEMU_HARDDISK_08b58ced-3c5b-405c-ae09-2d18558cfc25'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_08b58ced-3c5b-405c-ae09-2d18558cfc25-part1', 'scsi-SQEMU_QEMU_HARDDISK_08b58ced-3c5b-405c-ae09-2d18558cfc25-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_08b58ced-3c5b-405c-ae09-2d18558cfc25-part14', 'scsi-SQEMU_QEMU_HARDDISK_08b58ced-3c5b-405c-ae09-2d18558cfc25-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_08b58ced-3c5b-405c-ae09-2d18558cfc25-part15', 'scsi-SQEMU_QEMU_HARDDISK_08b58ced-3c5b-405c-ae09-2d18558cfc25-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_08b58ced-3c5b-405c-ae09-2d18558cfc25-part16', 'scsi-SQEMU_QEMU_HARDDISK_08b58ced-3c5b-405c-ae09-2d18558cfc25-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-06-02 20:06:51.431590 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--86208513--8fbd--535b--80fd--915c228be133-osd--block--86208513--8fbd--535b--80fd--915c228be133'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-PwbXIf-nbiY-VZEp-Jwyt-8O2F-GGtW-x9I8wZ', 'scsi-0QEMU_QEMU_HARDDISK_fe4e5841-a5c6-4e3c-a2f3-c1ddedcfef3b', 'scsi-SQEMU_QEMU_HARDDISK_fe4e5841-a5c6-4e3c-a2f3-c1ddedcfef3b'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-06-02 20:06:51.431602 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--ed769c7c--5756--52eb--9583--a607cefce370-osd--block--ed769c7c--5756--52eb--9583--a607cefce370'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-i42UGR-1M23-JeND-WGwO-3Hx7-Q2xw-qnnSe5', 'scsi-0QEMU_QEMU_HARDDISK_17194968-3402-4871-a3b7-d8b4dd3032d8', 'scsi-SQEMU_QEMU_HARDDISK_17194968-3402-4871-a3b7-d8b4dd3032d8'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-06-02 20:06:51.431733 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-06-02 20:06:51.431755 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3a83bf91-153f-49f3-b384-9ce8856c05fb', 'scsi-SQEMU_QEMU_HARDDISK_3a83bf91-153f-49f3-b384-9ce8856c05fb'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-06-02 20:06:51.431770 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_453458da-4d99-4de0-a2fa-ec8f657b9d69', 'scsi-SQEMU_QEMU_HARDDISK_453458da-4d99-4de0-a2fa-ec8f657b9d69'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_453458da-4d99-4de0-a2fa-ec8f657b9d69-part1', 'scsi-SQEMU_QEMU_HARDDISK_453458da-4d99-4de0-a2fa-ec8f657b9d69-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_453458da-4d99-4de0-a2fa-ec8f657b9d69-part14', 'scsi-SQEMU_QEMU_HARDDISK_453458da-4d99-4de0-a2fa-ec8f657b9d69-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_453458da-4d99-4de0-a2fa-ec8f657b9d69-part15', 'scsi-SQEMU_QEMU_HARDDISK_453458da-4d99-4de0-a2fa-ec8f657b9d69-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_453458da-4d99-4de0-a2fa-ec8f657b9d69-part16', 'scsi-SQEMU_QEMU_HARDDISK_453458da-4d99-4de0-a2fa-ec8f657b9d69-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-06-02 20:06:51.431822 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-06-02-19-17-37-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-06-02 20:06:51.431832 | orchestrator | skipping: [testbed-node-5]
2025-06-02 20:06:51.431843 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--bdb59653--b88e--5628--a878--3ed7677d43f1-osd--block--bdb59653--b88e--5628--a878--3ed7677d43f1'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-MScA2o-wrwR-cxTI-HSN1-CJaA-ZmWO-duVTze', 'scsi-0QEMU_QEMU_HARDDISK_56067267-e29e-4b33-bc58-6a568e4c77ee', 'scsi-SQEMU_QEMU_HARDDISK_56067267-e29e-4b33-bc58-6a568e4c77ee'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-06-02 20:06:51.431856 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--ee20b18c--4531--5b6f--acaf--50beaceb257d-osd--block--ee20b18c--4531--5b6f--acaf--50beaceb257d'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-GWz1kk-3I3I-MJR6-xAen-2SHi-BVgj-e8DG44', 'scsi-0QEMU_QEMU_HARDDISK_afb213e9-57a6-474d-a5f5-62ab693fc54b', 'scsi-SQEMU_QEMU_HARDDISK_afb213e9-57a6-474d-a5f5-62ab693fc54b'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-06-02 20:06:51.431863 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b537626e-57d0-4db8-bc93-475b5479d5db', 'scsi-SQEMU_QEMU_HARDDISK_b537626e-57d0-4db8-bc93-475b5479d5db'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-06-02 20:06:51.431870 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-06-02-19-17-35-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-06-02 20:06:51.431877 | orchestrator | skipping: [testbed-node-4]
2025-06-02 20:06:51.431904 | orchestrator |
2025-06-02 20:06:51.431912 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ******************************
2025-06-02 20:06:51.431919 | orchestrator | Monday 02 June 2025 19:56:22 +0000 (0:00:01.883) 0:00:32.938 ***********
2025-06-02 20:06:51.431926 | orchestrator | ok: [testbed-node-0]
2025-06-02 20:06:51.431933 | orchestrator | ok: [testbed-node-1]
2025-06-02 20:06:51.431940 | orchestrator | ok: [testbed-node-2]
2025-06-02 20:06:51.431997 | orchestrator | ok: [testbed-node-3]
2025-06-02 20:06:51.432010 | orchestrator | ok: [testbed-node-4]
2025-06-02 20:06:51.432041 | orchestrator | ok: [testbed-node-5]
2025-06-02 20:06:51.432052 | orchestrator |
2025-06-02 20:06:51.432063 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] ***************
2025-06-02 20:06:51.432075 | orchestrator | Monday 02 June 2025 19:56:23 +0000 (0:00:01.187) 0:00:34.125 ***********
2025-06-02 20:06:51.432086 | orchestrator | ok: [testbed-node-0]
2025-06-02 20:06:51.432098 | orchestrator | ok: [testbed-node-1]
2025-06-02 20:06:51.432109 | orchestrator | ok: [testbed-node-2]
2025-06-02 20:06:51.432121 | orchestrator | ok: [testbed-node-3]
2025-06-02 20:06:51.432132 | orchestrator | ok: [testbed-node-4]
2025-06-02 20:06:51.432143 | orchestrator | ok: [testbed-node-5]
2025-06-02 20:06:51.432155 | orchestrator |
2025-06-02 20:06:51.432177 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2025-06-02 20:06:51.432188 | orchestrator | Monday 02 June 2025 19:56:24 +0000 (0:00:00.967) 0:00:35.093 ***********
2025-06-02 20:06:51.432195 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:06:51.432202 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:06:51.432208 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:06:51.432215 | orchestrator | skipping: [testbed-node-3]
2025-06-02 20:06:51.432221 | orchestrator | skipping: [testbed-node-4]
2025-06-02 20:06:51.432228 | orchestrator | skipping: [testbed-node-5]
2025-06-02 20:06:51.432234 | orchestrator |
2025-06-02 20:06:51.432241 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2025-06-02 20:06:51.432248 | orchestrator | Monday 02 June 2025 19:56:25 +0000 (0:00:01.173) 0:00:36.266 ***********
2025-06-02 20:06:51.432259 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:06:51.432266 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:06:51.432273 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:06:51.432279 | orchestrator | skipping: [testbed-node-3]
2025-06-02 20:06:51.432286 | orchestrator | skipping: [testbed-node-4]
2025-06-02 20:06:51.432293 | orchestrator | skipping: [testbed-node-5]
2025-06-02 20:06:51.432300 | orchestrator |
2025-06-02 20:06:51.432306 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2025-06-02 20:06:51.432313 | orchestrator | Monday 02 June 2025 19:56:26 +0000 (0:00:00.501) 0:00:36.768 ***********
2025-06-02 20:06:51.432334 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:06:51.432341 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:06:51.432347 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:06:51.432354 | orchestrator | skipping: [testbed-node-3]
2025-06-02 20:06:51.432360 | orchestrator | skipping: [testbed-node-4]
2025-06-02 20:06:51.432367 | orchestrator | skipping: [testbed-node-5]
2025-06-02 20:06:51.432373 | orchestrator |
2025-06-02 20:06:51.432381 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2025-06-02 20:06:51.432392 | orchestrator | Monday 02 June 2025 19:56:27 +0000 (0:00:01.100) 0:00:37.868 ***********
2025-06-02 20:06:51.432402 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:06:51.432413 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:06:51.432424 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:06:51.432436 | orchestrator | skipping: [testbed-node-3]
2025-06-02 20:06:51.432447 | orchestrator | skipping: [testbed-node-4]
2025-06-02 20:06:51.432458 | orchestrator | skipping: [testbed-node-5]
2025-06-02 20:06:51.432470 | orchestrator |
2025-06-02 20:06:51.432477 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2025-06-02 20:06:51.432484 | orchestrator | Monday 02 June 2025 19:56:28 +0000 (0:00:00.929) 0:00:38.798 ***********
2025-06-02 20:06:51.432491 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2025-06-02 20:06:51.432498 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-0)
2025-06-02 20:06:51.432504 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-0)
2025-06-02 20:06:51.432511 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1)
2025-06-02 20:06:51.432517 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1)
2025-06-02 20:06:51.432524 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-1)
2025-06-02 20:06:51.432530 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0)
2025-06-02 20:06:51.432537 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0)
2025-06-02 20:06:51.432543 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2)
2025-06-02 20:06:51.432550 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2)
2025-06-02 20:06:51.432556 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-2)
2025-06-02 20:06:51.432563 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0)
2025-06-02 20:06:51.432569 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1)
2025-06-02 20:06:51.432576 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1)
2025-06-02 20:06:51.432582 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2)
2025-06-02 20:06:51.432594 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1)
2025-06-02 20:06:51.432601 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2)
2025-06-02 20:06:51.432607 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2)
2025-06-02 20:06:51.432614 | orchestrator |
2025-06-02 20:06:51.432621 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] *************************
2025-06-02 20:06:51.432629 | orchestrator | Monday 02 June 2025 19:56:31 +0000 (0:00:02.801) 0:00:41.600 ***********
2025-06-02 20:06:51.432637 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2025-06-02 20:06:51.432645 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2025-06-02 20:06:51.432653 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2025-06-02 20:06:51.432661 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:06:51.432668 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)
2025-06-02 20:06:51.432676 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)
2025-06-02 20:06:51.432683 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)
2025-06-02 20:06:51.432691 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:06:51.432699 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)
2025-06-02 20:06:51.432710 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)
2025-06-02 20:06:51.432721 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)
2025-06-02 20:06:51.432730 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:06:51.432783 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2025-06-02 20:06:51.432794 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2025-06-02 20:06:51.432804 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2025-06-02 20:06:51.432814 | orchestrator | skipping: [testbed-node-3]
2025-06-02 20:06:51.432824 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2025-06-02 20:06:51.432834 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2025-06-02 20:06:51.432844 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2025-06-02 20:06:51.432853 | orchestrator | skipping: [testbed-node-4]
2025-06-02 20:06:51.432865 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2025-06-02 20:06:51.432875 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2025-06-02 20:06:51.432907 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2025-06-02 20:06:51.432918 | orchestrator | skipping: [testbed-node-5]
2025-06-02 20:06:51.432928 | orchestrator |
2025-06-02 20:06:51.432938 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] ***********************
2025-06-02 20:06:51.432949 | orchestrator | Monday 02 June 2025 19:56:32 +0000 (0:00:01.628) 0:00:43.228 ***********
2025-06-02 20:06:51.432960 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:06:51.432971 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:06:51.432983 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:06:51.433003 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-06-02 20:06:51.433016 | orchestrator |
2025-06-02 20:06:51.433028 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2025-06-02 20:06:51.433041 | orchestrator | Monday 02 June 2025 19:56:34 +0000 (0:00:01.894) 0:00:45.122 ***********
2025-06-02 20:06:51.433051 | orchestrator | skipping: [testbed-node-3]
2025-06-02 20:06:51.433060 | orchestrator | skipping: [testbed-node-4]
2025-06-02 20:06:51.433070 | orchestrator | skipping: [testbed-node-5]
2025-06-02 20:06:51.433080 | orchestrator |
2025-06-02 20:06:51.433091 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2025-06-02 20:06:51.433102 | orchestrator | Monday 02 June 2025 19:56:35 +0000 (0:00:00.347) 0:00:45.470 ***********
2025-06-02 20:06:51.433113 | orchestrator | skipping: [testbed-node-3]
2025-06-02 20:06:51.433123 | orchestrator | skipping: [testbed-node-4]
2025-06-02 20:06:51.433169 | orchestrator | skipping: [testbed-node-5]
2025-06-02 20:06:51.433180 | orchestrator |
2025-06-02 20:06:51.433188 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2025-06-02 20:06:51.433194 | orchestrator | Monday 02 June 2025 19:56:35 +0000 (0:00:00.494) 0:00:45.964 ***********
2025-06-02 20:06:51.433201 | orchestrator | skipping: [testbed-node-3]
2025-06-02 20:06:51.433208 | orchestrator | skipping: [testbed-node-4]
2025-06-02 20:06:51.433214 | orchestrator | skipping: [testbed-node-5]
2025-06-02 20:06:51.433221 | orchestrator |
2025-06-02 20:06:51.433228 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2025-06-02 20:06:51.433234 | orchestrator | Monday 02 June 2025 19:56:35 +0000 (0:00:00.274) 0:00:46.239 ***********
2025-06-02 20:06:51.433241 | orchestrator | ok: [testbed-node-3]
2025-06-02 20:06:51.433248 | orchestrator | ok: [testbed-node-4]
2025-06-02 20:06:51.433254 | orchestrator | ok: [testbed-node-5]
2025-06-02 20:06:51.433261 | orchestrator |
2025-06-02 20:06:51.433267 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2025-06-02 20:06:51.433274 | orchestrator | Monday 02 June 2025 19:56:36 +0000 (0:00:00.355) 0:00:46.595 ***********
2025-06-02 20:06:51.433280 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-06-02 20:06:51.433287 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-06-02 20:06:51.433294 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-06-02 20:06:51.433300 | orchestrator | skipping: [testbed-node-3]
2025-06-02 20:06:51.433307 | orchestrator |
2025-06-02 20:06:51.433313 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2025-06-02 20:06:51.433320 | orchestrator | Monday 02 June 2025 19:56:36 +0000 (0:00:00.353) 0:00:46.948 ***********
2025-06-02 20:06:51.433326 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-06-02 20:06:51.433333 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-06-02 20:06:51.433339 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-06-02 20:06:51.433346 | orchestrator | skipping: [testbed-node-3]
2025-06-02 20:06:51.433352 | orchestrator |
2025-06-02 20:06:51.433359 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2025-06-02 20:06:51.433366 | orchestrator | Monday 02 June 2025 19:56:37 +0000 (0:00:00.498) 0:00:47.447 ***********
2025-06-02 20:06:51.433372 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-06-02 20:06:51.433379 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-06-02 20:06:51.433385 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-06-02 20:06:51.433392 | orchestrator | skipping: [testbed-node-3]
2025-06-02 20:06:51.433398 | orchestrator |
2025-06-02 20:06:51.433405 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2025-06-02 20:06:51.433412 | orchestrator | Monday 02 June 2025 19:56:37 +0000 (0:00:00.526) 0:00:47.974 ***********
2025-06-02 20:06:51.433418 | orchestrator | ok: [testbed-node-3]
2025-06-02 20:06:51.433425 | orchestrator | ok: [testbed-node-4]
2025-06-02 20:06:51.433431 | orchestrator | ok: [testbed-node-5]
2025-06-02 20:06:51.433438 | orchestrator |
2025-06-02 20:06:51.433445 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2025-06-02 20:06:51.433451 | orchestrator | Monday 02 June 2025 19:56:38 +0000 (0:00:00.410) 0:00:48.385 ***********
2025-06-02 20:06:51.433458 | orchestrator | ok: [testbed-node-3] => (item=0)
2025-06-02 20:06:51.433465 | orchestrator | ok: [testbed-node-4] => (item=0)
2025-06-02 20:06:51.433471 | orchestrator | ok: [testbed-node-5] => (item=0)
2025-06-02 20:06:51.433478 | orchestrator |
2025-06-02 20:06:51.433485 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] **************************************
2025-06-02 20:06:51.433491 | orchestrator | Monday 02 June 2025 19:56:38 +0000 (0:00:00.704) 0:00:49.089 ***********
2025-06-02 20:06:51.433537 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2025-06-02 20:06:51.433545 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-06-02 20:06:51.433559 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-06-02 20:06:51.433566 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2025-06-02 20:06:51.433572 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2025-06-02 20:06:51.433579 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2025-06-02 20:06:51.433585 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2025-06-02 20:06:51.433592 | orchestrator |
2025-06-02 20:06:51.433599 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ********************************
2025-06-02 20:06:51.433605 | orchestrator | Monday 02 June 2025 19:56:39 +0000 (0:00:00.989) 0:00:50.078 ***********
2025-06-02 20:06:51.433612 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2025-06-02 20:06:51.433618 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-06-02 20:06:51.433625 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-06-02 20:06:51.433636 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2025-06-02 20:06:51.433642 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2025-06-02 20:06:51.433649 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2025-06-02 20:06:51.433655 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2025-06-02 20:06:51.433662 | orchestrator |
2025-06-02 20:06:51.433669 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2025-06-02 20:06:51.433675 | orchestrator | Monday 02 June 2025 19:56:42 +0000 (0:00:02.326) 0:00:52.405 ***********
2025-06-02 20:06:51.433682 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-06-02 20:06:51.433691 | orchestrator |
2025-06-02 20:06:51.433697 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2025-06-02 20:06:51.433704 | orchestrator | Monday 02 June 2025 19:56:43 +0000 (0:00:01.067) 0:00:53.472 ***********
2025-06-02 20:06:51.433711 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-06-02 20:06:51.433717 | orchestrator |
2025-06-02 20:06:51.433724 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2025-06-02 20:06:51.433731 | orchestrator | Monday 02 June 2025 19:56:44 +0000 (0:00:01.017) 0:00:54.489 ***********
2025-06-02 20:06:51.433737 | orchestrator | skipping: [testbed-node-3]
2025-06-02 20:06:51.433744 | orchestrator | skipping: [testbed-node-4]
2025-06-02 20:06:51.433750 | orchestrator | ok: [testbed-node-0]
2025-06-02 20:06:51.433757 | orchestrator | skipping: [testbed-node-5]
2025-06-02 20:06:51.433763 | orchestrator | ok: [testbed-node-1]
2025-06-02 20:06:51.433770 | orchestrator | ok: [testbed-node-2]
2025-06-02 20:06:51.433777 | orchestrator |
2025-06-02 20:06:51.433783 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2025-06-02 20:06:51.433790 | orchestrator | Monday 02 June 2025 19:56:44 +0000 (0:00:00.673) 0:00:55.163 ***********
2025-06-02 20:06:51.433797 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:06:51.433803 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:06:51.433810 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:06:51.433816 | orchestrator | ok: [testbed-node-3]
2025-06-02 20:06:51.433823 | orchestrator | ok: [testbed-node-4]
2025-06-02 20:06:51.433830 | orchestrator | ok: [testbed-node-5]
2025-06-02 20:06:51.433836 | orchestrator |
2025-06-02 20:06:51.433843 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2025-06-02 20:06:51.433849 | orchestrator | Monday 02 June 2025 19:56:46 +0000 (0:00:01.387) 0:00:56.550 ***********
2025-06-02 20:06:51.433860 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:06:51.433867 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:06:51.433874 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:06:51.433932 | orchestrator | ok: [testbed-node-3]
2025-06-02 20:06:51.433940 | orchestrator | ok: [testbed-node-4]
2025-06-02 20:06:51.433946 | orchestrator | ok: [testbed-node-5]
2025-06-02 20:06:51.433953 | orchestrator |
2025-06-02 20:06:51.433960 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2025-06-02 20:06:51.433966 | orchestrator | Monday 02 June 2025 19:56:47 +0000 (0:00:01.305) 0:00:57.856 ***********
2025-06-02 20:06:51.433973 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:06:51.433979 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:06:51.433986 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:06:51.433992 | orchestrator | ok: [testbed-node-3]
2025-06-02 20:06:51.433999 | orchestrator | ok: [testbed-node-4]
2025-06-02 20:06:51.434005 | orchestrator | ok: [testbed-node-5]
2025-06-02 20:06:51.434012 | orchestrator |
2025-06-02 20:06:51.434078 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2025-06-02 20:06:51.434089 | orchestrator | Monday 02 June 2025 19:56:48 +0000 (0:00:01.223) 0:00:59.079 ***********
2025-06-02 20:06:51.434101 | orchestrator | skipping: [testbed-node-3]
2025-06-02 20:06:51.434111 | orchestrator | ok: [testbed-node-0]
2025-06-02 20:06:51.434121 | orchestrator | ok: [testbed-node-1]
2025-06-02 20:06:51.434132 | orchestrator | skipping: [testbed-node-4]
2025-06-02 20:06:51.434143 | orchestrator | ok: [testbed-node-2]
2025-06-02 20:06:51.434155 | orchestrator | skipping: [testbed-node-5]
2025-06-02 20:06:51.434166 | orchestrator |
2025-06-02 20:06:51.434178 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2025-06-02 20:06:51.434190 | orchestrator | Monday 02 June 2025 19:56:49 +0000 (0:00:01.047) 0:01:00.127 ***********
2025-06-02 20:06:51.434242 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:06:51.434254 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:06:51.434264 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:06:51.434271 | orchestrator | skipping: [testbed-node-3]
2025-06-02 20:06:51.434292 | orchestrator | skipping: [testbed-node-4]
2025-06-02 20:06:51.434298 | orchestrator | skipping: [testbed-node-5]
2025-06-02 20:06:51.434304 | orchestrator |
2025-06-02 20:06:51.434310 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2025-06-02 20:06:51.434317 | orchestrator | Monday 02 June 2025 19:56:50 +0000 (0:00:00.755) 0:01:00.771 ***********
2025-06-02 20:06:51.434331 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:06:51.434337 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:06:51.434343 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:06:51.434353 | orchestrator | skipping: [testbed-node-3]
2025-06-02 20:06:51.434363 | orchestrator | skipping: [testbed-node-4]
2025-06-02 20:06:51.434373 | orchestrator | skipping: [testbed-node-5]
2025-06-02 20:06:51.434391 | orchestrator |
2025-06-02 20:06:51.434403 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2025-06-02 20:06:51.434413 | orchestrator | Monday 02 June 2025 19:56:51 +0000 (0:00:01.177) 0:01:01.527 ***********
2025-06-02 20:06:51.434424 | orchestrator | ok: [testbed-node-0]
2025-06-02 20:06:51.434435 | orchestrator | ok: [testbed-node-1]
2025-06-02 20:06:51.434446 | orchestrator | ok: [testbed-node-2]
2025-06-02 20:06:51.434458 | orchestrator | ok: [testbed-node-3]
2025-06-02 20:06:51.434469 | orchestrator | ok: [testbed-node-4]
2025-06-02 20:06:51.434477 | orchestrator | ok: [testbed-node-5]
2025-06-02 20:06:51.434483 | orchestrator |
2025-06-02 20:06:51.434489 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2025-06-02 20:06:51.434502 | orchestrator | Monday 02 June 2025 19:56:52 +0000 (0:00:01.177) 0:01:02.704 ***********
2025-06-02 20:06:51.434509 | orchestrator | ok: [testbed-node-0]
2025-06-02 20:06:51.434515 | orchestrator | ok: [testbed-node-1]
2025-06-02 20:06:51.434521 | orchestrator | ok: [testbed-node-2]
2025-06-02 20:06:51.434527 | orchestrator | ok: [testbed-node-3]
2025-06-02 20:06:51.434533 | orchestrator | ok: [testbed-node-4]
2025-06-02 20:06:51.434547 |
orchestrator | ok: [testbed-node-5] 2025-06-02 20:06:51.434553 | orchestrator | 2025-06-02 20:06:51.434559 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-06-02 20:06:51.434565 | orchestrator | Monday 02 June 2025 19:56:53 +0000 (0:00:01.445) 0:01:04.149 *********** 2025-06-02 20:06:51.434571 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:06:51.434577 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:06:51.434583 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:06:51.434589 | orchestrator | skipping: [testbed-node-3] 2025-06-02 20:06:51.434595 | orchestrator | skipping: [testbed-node-4] 2025-06-02 20:06:51.434601 | orchestrator | skipping: [testbed-node-5] 2025-06-02 20:06:51.434607 | orchestrator | 2025-06-02 20:06:51.434613 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2025-06-02 20:06:51.434619 | orchestrator | Monday 02 June 2025 19:56:54 +0000 (0:00:00.666) 0:01:04.816 *********** 2025-06-02 20:06:51.434626 | orchestrator | ok: [testbed-node-0] 2025-06-02 20:06:51.434632 | orchestrator | ok: [testbed-node-1] 2025-06-02 20:06:51.434638 | orchestrator | ok: [testbed-node-2] 2025-06-02 20:06:51.434644 | orchestrator | skipping: [testbed-node-3] 2025-06-02 20:06:51.434650 | orchestrator | skipping: [testbed-node-4] 2025-06-02 20:06:51.434656 | orchestrator | skipping: [testbed-node-5] 2025-06-02 20:06:51.434662 | orchestrator | 2025-06-02 20:06:51.434668 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-06-02 20:06:51.434674 | orchestrator | Monday 02 June 2025 19:56:55 +0000 (0:00:00.903) 0:01:05.719 *********** 2025-06-02 20:06:51.434680 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:06:51.434686 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:06:51.434692 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:06:51.434698 | orchestrator | ok: 
[testbed-node-3] 2025-06-02 20:06:51.434704 | orchestrator | ok: [testbed-node-4] 2025-06-02 20:06:51.434710 | orchestrator | ok: [testbed-node-5] 2025-06-02 20:06:51.434716 | orchestrator | 2025-06-02 20:06:51.434722 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2025-06-02 20:06:51.434729 | orchestrator | Monday 02 June 2025 19:56:56 +0000 (0:00:00.644) 0:01:06.363 *********** 2025-06-02 20:06:51.434735 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:06:51.434741 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:06:51.434747 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:06:51.434753 | orchestrator | ok: [testbed-node-3] 2025-06-02 20:06:51.434759 | orchestrator | ok: [testbed-node-4] 2025-06-02 20:06:51.434765 | orchestrator | ok: [testbed-node-5] 2025-06-02 20:06:51.434771 | orchestrator | 2025-06-02 20:06:51.434777 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-06-02 20:06:51.434783 | orchestrator | Monday 02 June 2025 19:56:57 +0000 (0:00:00.952) 0:01:07.316 *********** 2025-06-02 20:06:51.434789 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:06:51.434795 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:06:51.434801 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:06:51.434807 | orchestrator | ok: [testbed-node-3] 2025-06-02 20:06:51.434813 | orchestrator | ok: [testbed-node-4] 2025-06-02 20:06:51.434819 | orchestrator | ok: [testbed-node-5] 2025-06-02 20:06:51.434825 | orchestrator | 2025-06-02 20:06:51.434831 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2025-06-02 20:06:51.434837 | orchestrator | Monday 02 June 2025 19:56:57 +0000 (0:00:00.605) 0:01:07.921 *********** 2025-06-02 20:06:51.434843 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:06:51.434849 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:06:51.434855 | 
orchestrator | skipping: [testbed-node-2] 2025-06-02 20:06:51.434861 | orchestrator | skipping: [testbed-node-3] 2025-06-02 20:06:51.434867 | orchestrator | skipping: [testbed-node-4] 2025-06-02 20:06:51.434873 | orchestrator | skipping: [testbed-node-5] 2025-06-02 20:06:51.434982 | orchestrator | 2025-06-02 20:06:51.435018 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-06-02 20:06:51.435028 | orchestrator | Monday 02 June 2025 19:56:58 +0000 (0:00:00.772) 0:01:08.694 *********** 2025-06-02 20:06:51.435043 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:06:51.435049 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:06:51.435055 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:06:51.435061 | orchestrator | skipping: [testbed-node-3] 2025-06-02 20:06:51.435067 | orchestrator | skipping: [testbed-node-4] 2025-06-02 20:06:51.435074 | orchestrator | skipping: [testbed-node-5] 2025-06-02 20:06:51.435080 | orchestrator | 2025-06-02 20:06:51.435086 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2025-06-02 20:06:51.435142 | orchestrator | Monday 02 June 2025 19:56:58 +0000 (0:00:00.540) 0:01:09.235 *********** 2025-06-02 20:06:51.435149 | orchestrator | ok: [testbed-node-0] 2025-06-02 20:06:51.435156 | orchestrator | ok: [testbed-node-1] 2025-06-02 20:06:51.435162 | orchestrator | ok: [testbed-node-2] 2025-06-02 20:06:51.435168 | orchestrator | skipping: [testbed-node-3] 2025-06-02 20:06:51.435174 | orchestrator | skipping: [testbed-node-4] 2025-06-02 20:06:51.435180 | orchestrator | skipping: [testbed-node-5] 2025-06-02 20:06:51.435187 | orchestrator | 2025-06-02 20:06:51.435193 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2025-06-02 20:06:51.435199 | orchestrator | Monday 02 June 2025 19:56:59 +0000 (0:00:00.743) 0:01:09.979 *********** 2025-06-02 20:06:51.435205 | orchestrator | ok: 
[testbed-node-0] 2025-06-02 20:06:51.435211 | orchestrator | ok: [testbed-node-1] 2025-06-02 20:06:51.435218 | orchestrator | ok: [testbed-node-2] 2025-06-02 20:06:51.435224 | orchestrator | ok: [testbed-node-3] 2025-06-02 20:06:51.435230 | orchestrator | ok: [testbed-node-4] 2025-06-02 20:06:51.435236 | orchestrator | ok: [testbed-node-5] 2025-06-02 20:06:51.435242 | orchestrator | 2025-06-02 20:06:51.435248 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2025-06-02 20:06:51.435255 | orchestrator | Monday 02 June 2025 19:57:00 +0000 (0:00:00.589) 0:01:10.568 *********** 2025-06-02 20:06:51.435261 | orchestrator | ok: [testbed-node-0] 2025-06-02 20:06:51.435267 | orchestrator | ok: [testbed-node-1] 2025-06-02 20:06:51.435273 | orchestrator | ok: [testbed-node-2] 2025-06-02 20:06:51.435279 | orchestrator | ok: [testbed-node-3] 2025-06-02 20:06:51.435285 | orchestrator | ok: [testbed-node-4] 2025-06-02 20:06:51.435291 | orchestrator | ok: [testbed-node-5] 2025-06-02 20:06:51.435297 | orchestrator | 2025-06-02 20:06:51.435313 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] *************** 2025-06-02 20:06:51.435319 | orchestrator | Monday 02 June 2025 19:57:01 +0000 (0:00:01.174) 0:01:11.742 *********** 2025-06-02 20:06:51.435326 | orchestrator | changed: [testbed-node-2] 2025-06-02 20:06:51.435349 | orchestrator | changed: [testbed-node-1] 2025-06-02 20:06:51.435356 | orchestrator | changed: [testbed-node-0] 2025-06-02 20:06:51.435362 | orchestrator | changed: [testbed-node-3] 2025-06-02 20:06:51.435368 | orchestrator | changed: [testbed-node-4] 2025-06-02 20:06:51.435374 | orchestrator | changed: [testbed-node-5] 2025-06-02 20:06:51.435380 | orchestrator | 2025-06-02 20:06:51.435386 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ****************************** 2025-06-02 20:06:51.435393 | orchestrator | Monday 02 June 2025 19:57:03 +0000 (0:00:01.863) 0:01:13.606 
*********** 2025-06-02 20:06:51.435399 | orchestrator | changed: [testbed-node-2] 2025-06-02 20:06:51.435405 | orchestrator | changed: [testbed-node-4] 2025-06-02 20:06:51.435411 | orchestrator | changed: [testbed-node-0] 2025-06-02 20:06:51.435417 | orchestrator | changed: [testbed-node-3] 2025-06-02 20:06:51.435423 | orchestrator | changed: [testbed-node-5] 2025-06-02 20:06:51.435429 | orchestrator | changed: [testbed-node-1] 2025-06-02 20:06:51.435435 | orchestrator | 2025-06-02 20:06:51.435442 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] *********************** 2025-06-02 20:06:51.435448 | orchestrator | Monday 02 June 2025 19:57:05 +0000 (0:00:01.938) 0:01:15.545 *********** 2025-06-02 20:06:51.435455 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-02 20:06:51.435468 | orchestrator | 2025-06-02 20:06:51.435474 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************ 2025-06-02 20:06:51.435480 | orchestrator | Monday 02 June 2025 19:57:06 +0000 (0:00:01.184) 0:01:16.729 *********** 2025-06-02 20:06:51.435486 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:06:51.435492 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:06:51.435499 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:06:51.435505 | orchestrator | skipping: [testbed-node-3] 2025-06-02 20:06:51.435511 | orchestrator | skipping: [testbed-node-4] 2025-06-02 20:06:51.435517 | orchestrator | skipping: [testbed-node-5] 2025-06-02 20:06:51.435523 | orchestrator | 2025-06-02 20:06:51.435529 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] **************** 2025-06-02 20:06:51.435536 | orchestrator | Monday 02 June 2025 19:57:07 +0000 (0:00:00.806) 0:01:17.536 *********** 2025-06-02 20:06:51.435542 | orchestrator | skipping: 
[testbed-node-0] 2025-06-02 20:06:51.435548 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:06:51.435554 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:06:51.435560 | orchestrator | skipping: [testbed-node-3] 2025-06-02 20:06:51.435566 | orchestrator | skipping: [testbed-node-4] 2025-06-02 20:06:51.435572 | orchestrator | skipping: [testbed-node-5] 2025-06-02 20:06:51.435579 | orchestrator | 2025-06-02 20:06:51.435585 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] ************************** 2025-06-02 20:06:51.435591 | orchestrator | Monday 02 June 2025 19:57:07 +0000 (0:00:00.581) 0:01:18.117 *********** 2025-06-02 20:06:51.435597 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-06-02 20:06:51.435604 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-06-02 20:06:51.435610 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-06-02 20:06:51.435617 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-06-02 20:06:51.435628 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-06-02 20:06:51.435637 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-06-02 20:06:51.435643 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-06-02 20:06:51.435649 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-06-02 20:06:51.435655 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-06-02 20:06:51.435661 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-06-02 20:06:51.435668 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-06-02 
20:06:51.435674 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-06-02 20:06:51.435680 | orchestrator | 2025-06-02 20:06:51.435714 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ******************** 2025-06-02 20:06:51.435721 | orchestrator | Monday 02 June 2025 19:57:09 +0000 (0:00:01.564) 0:01:19.681 *********** 2025-06-02 20:06:51.435728 | orchestrator | changed: [testbed-node-0] 2025-06-02 20:06:51.435734 | orchestrator | changed: [testbed-node-1] 2025-06-02 20:06:51.435740 | orchestrator | changed: [testbed-node-2] 2025-06-02 20:06:51.435746 | orchestrator | changed: [testbed-node-3] 2025-06-02 20:06:51.435752 | orchestrator | changed: [testbed-node-4] 2025-06-02 20:06:51.435759 | orchestrator | changed: [testbed-node-5] 2025-06-02 20:06:51.435765 | orchestrator | 2025-06-02 20:06:51.435771 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************ 2025-06-02 20:06:51.435777 | orchestrator | Monday 02 June 2025 19:57:10 +0000 (0:00:00.955) 0:01:20.636 *********** 2025-06-02 20:06:51.435783 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:06:51.435789 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:06:51.435795 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:06:51.435801 | orchestrator | skipping: [testbed-node-3] 2025-06-02 20:06:51.435813 | orchestrator | skipping: [testbed-node-4] 2025-06-02 20:06:51.435820 | orchestrator | skipping: [testbed-node-5] 2025-06-02 20:06:51.435826 | orchestrator | 2025-06-02 20:06:51.435832 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ******************** 2025-06-02 20:06:51.435838 | orchestrator | Monday 02 June 2025 19:57:11 +0000 (0:00:00.985) 0:01:21.622 *********** 2025-06-02 20:06:51.435847 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:06:51.435857 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:06:51.435863 | 
orchestrator | skipping: [testbed-node-2] 2025-06-02 20:06:51.435873 | orchestrator | skipping: [testbed-node-3] 2025-06-02 20:06:51.435901 | orchestrator | skipping: [testbed-node-4] 2025-06-02 20:06:51.435912 | orchestrator | skipping: [testbed-node-5] 2025-06-02 20:06:51.435923 | orchestrator | 2025-06-02 20:06:51.435930 | orchestrator | TASK [ceph-container-common : Include registry.yml] **************************** 2025-06-02 20:06:51.435936 | orchestrator | Monday 02 June 2025 19:57:11 +0000 (0:00:00.576) 0:01:22.198 *********** 2025-06-02 20:06:51.435942 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:06:51.435948 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:06:51.435954 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:06:51.435960 | orchestrator | skipping: [testbed-node-3] 2025-06-02 20:06:51.435966 | orchestrator | skipping: [testbed-node-4] 2025-06-02 20:06:51.435972 | orchestrator | skipping: [testbed-node-5] 2025-06-02 20:06:51.435978 | orchestrator | 2025-06-02 20:06:51.435984 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] ************************* 2025-06-02 20:06:51.435990 | orchestrator | Monday 02 June 2025 19:57:12 +0000 (0:00:00.803) 0:01:23.002 *********** 2025-06-02 20:06:51.435997 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-02 20:06:51.436003 | orchestrator | 2025-06-02 20:06:51.436009 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ******************** 2025-06-02 20:06:51.436015 | orchestrator | Monday 02 June 2025 19:57:13 +0000 (0:00:01.128) 0:01:24.131 *********** 2025-06-02 20:06:51.436022 | orchestrator | ok: [testbed-node-4] 2025-06-02 20:06:51.436033 | orchestrator | ok: [testbed-node-2] 2025-06-02 20:06:51.436043 | orchestrator | ok: [testbed-node-1] 2025-06-02 20:06:51.436052 | orchestrator | ok: 
[testbed-node-3] 2025-06-02 20:06:51.436061 | orchestrator | ok: [testbed-node-5] 2025-06-02 20:06:51.436071 | orchestrator | ok: [testbed-node-0] 2025-06-02 20:06:51.436081 | orchestrator | 2025-06-02 20:06:51.436091 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] *** 2025-06-02 20:06:51.436102 | orchestrator | Monday 02 June 2025 19:58:14 +0000 (0:01:00.829) 0:02:24.960 *********** 2025-06-02 20:06:51.436110 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-06-02 20:06:51.436116 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/prometheus:v2.7.2)  2025-06-02 20:06:51.436122 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/grafana/grafana:6.7.4)  2025-06-02 20:06:51.436128 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:06:51.436134 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-06-02 20:06:51.436140 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/prometheus:v2.7.2)  2025-06-02 20:06:51.436146 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/grafana/grafana:6.7.4)  2025-06-02 20:06:51.436152 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:06:51.436158 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-06-02 20:06:51.436164 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/prometheus:v2.7.2)  2025-06-02 20:06:51.436170 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/grafana/grafana:6.7.4)  2025-06-02 20:06:51.436176 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:06:51.436183 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-06-02 20:06:51.436198 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/prometheus:v2.7.2)  2025-06-02 20:06:51.436204 | orchestrator | skipping: 
[testbed-node-3] => (item=docker.io/grafana/grafana:6.7.4)  2025-06-02 20:06:51.436211 | orchestrator | skipping: [testbed-node-3] 2025-06-02 20:06:51.436217 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-06-02 20:06:51.436223 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/prometheus:v2.7.2)  2025-06-02 20:06:51.436229 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/grafana/grafana:6.7.4)  2025-06-02 20:06:51.436235 | orchestrator | skipping: [testbed-node-4] 2025-06-02 20:06:51.436241 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-06-02 20:06:51.436247 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/prometheus:v2.7.2)  2025-06-02 20:06:51.436253 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/grafana/grafana:6.7.4)  2025-06-02 20:06:51.436287 | orchestrator | skipping: [testbed-node-5] 2025-06-02 20:06:51.436294 | orchestrator | 2025-06-02 20:06:51.436300 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] *********** 2025-06-02 20:06:51.436307 | orchestrator | Monday 02 June 2025 19:58:15 +0000 (0:00:00.829) 0:02:25.789 *********** 2025-06-02 20:06:51.436313 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:06:51.436319 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:06:51.436325 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:06:51.436331 | orchestrator | skipping: [testbed-node-3] 2025-06-02 20:06:51.436337 | orchestrator | skipping: [testbed-node-4] 2025-06-02 20:06:51.436343 | orchestrator | skipping: [testbed-node-5] 2025-06-02 20:06:51.436349 | orchestrator | 2025-06-02 20:06:51.436355 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] ********************* 2025-06-02 20:06:51.436362 | orchestrator | Monday 02 June 2025 19:58:16 +0000 (0:00:00.571) 0:02:26.361 *********** 2025-06-02 20:06:51.436368 | 
orchestrator | skipping: [testbed-node-0] 2025-06-02 20:06:51.436374 | orchestrator | 2025-06-02 20:06:51.436380 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************ 2025-06-02 20:06:51.436386 | orchestrator | Monday 02 June 2025 19:58:16 +0000 (0:00:00.166) 0:02:26.527 *********** 2025-06-02 20:06:51.436392 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:06:51.436398 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:06:51.436404 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:06:51.436410 | orchestrator | skipping: [testbed-node-3] 2025-06-02 20:06:51.436416 | orchestrator | skipping: [testbed-node-4] 2025-06-02 20:06:51.436422 | orchestrator | skipping: [testbed-node-5] 2025-06-02 20:06:51.436428 | orchestrator | 2025-06-02 20:06:51.436439 | orchestrator | TASK [ceph-container-common : Load ceph dev image] ***************************** 2025-06-02 20:06:51.436445 | orchestrator | Monday 02 June 2025 19:58:17 +0000 (0:00:00.781) 0:02:27.309 *********** 2025-06-02 20:06:51.436451 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:06:51.436457 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:06:51.436463 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:06:51.436469 | orchestrator | skipping: [testbed-node-3] 2025-06-02 20:06:51.436475 | orchestrator | skipping: [testbed-node-4] 2025-06-02 20:06:51.436481 | orchestrator | skipping: [testbed-node-5] 2025-06-02 20:06:51.436487 | orchestrator | 2025-06-02 20:06:51.436493 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ****************** 2025-06-02 20:06:51.436499 | orchestrator | Monday 02 June 2025 19:58:17 +0000 (0:00:00.548) 0:02:27.858 *********** 2025-06-02 20:06:51.436505 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:06:51.436512 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:06:51.436518 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:06:51.436524 | 
orchestrator | skipping: [testbed-node-3] 2025-06-02 20:06:51.436530 | orchestrator | skipping: [testbed-node-4] 2025-06-02 20:06:51.436536 | orchestrator | skipping: [testbed-node-5] 2025-06-02 20:06:51.436542 | orchestrator | 2025-06-02 20:06:51.436594 | orchestrator | TASK [ceph-container-common : Get ceph version] ******************************** 2025-06-02 20:06:51.436600 | orchestrator | Monday 02 June 2025 19:58:18 +0000 (0:00:00.713) 0:02:28.572 *********** 2025-06-02 20:06:51.436607 | orchestrator | ok: [testbed-node-1] 2025-06-02 20:06:51.436613 | orchestrator | ok: [testbed-node-4] 2025-06-02 20:06:51.436619 | orchestrator | ok: [testbed-node-0] 2025-06-02 20:06:51.436625 | orchestrator | ok: [testbed-node-3] 2025-06-02 20:06:51.436631 | orchestrator | ok: [testbed-node-2] 2025-06-02 20:06:51.436637 | orchestrator | ok: [testbed-node-5] 2025-06-02 20:06:51.436643 | orchestrator | 2025-06-02 20:06:51.436649 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] *** 2025-06-02 20:06:51.436656 | orchestrator | Monday 02 June 2025 19:58:20 +0000 (0:00:01.998) 0:02:30.570 *********** 2025-06-02 20:06:51.436662 | orchestrator | ok: [testbed-node-0] 2025-06-02 20:06:51.436668 | orchestrator | ok: [testbed-node-1] 2025-06-02 20:06:51.436674 | orchestrator | ok: [testbed-node-2] 2025-06-02 20:06:51.436680 | orchestrator | ok: [testbed-node-3] 2025-06-02 20:06:51.436686 | orchestrator | ok: [testbed-node-4] 2025-06-02 20:06:51.436692 | orchestrator | ok: [testbed-node-5] 2025-06-02 20:06:51.436698 | orchestrator | 2025-06-02 20:06:51.436704 | orchestrator | TASK [ceph-container-common : Include release.yml] ***************************** 2025-06-02 20:06:51.436710 | orchestrator | Monday 02 June 2025 19:58:20 +0000 (0:00:00.678) 0:02:31.248 *********** 2025-06-02 20:06:51.436717 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-0, testbed-node-1, testbed-node-2, 
testbed-node-3, testbed-node-4, testbed-node-5 2025-06-02 20:06:51.436725 | orchestrator | 2025-06-02 20:06:51.436731 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] ********************* 2025-06-02 20:06:51.436737 | orchestrator | Monday 02 June 2025 19:58:21 +0000 (0:00:00.979) 0:02:32.227 *********** 2025-06-02 20:06:51.436743 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:06:51.436749 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:06:51.436755 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:06:51.436761 | orchestrator | skipping: [testbed-node-3] 2025-06-02 20:06:51.436767 | orchestrator | skipping: [testbed-node-4] 2025-06-02 20:06:51.436773 | orchestrator | skipping: [testbed-node-5] 2025-06-02 20:06:51.436779 | orchestrator | 2025-06-02 20:06:51.436785 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ******************** 2025-06-02 20:06:51.436791 | orchestrator | Monday 02 June 2025 19:58:22 +0000 (0:00:00.552) 0:02:32.780 *********** 2025-06-02 20:06:51.436797 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:06:51.436803 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:06:51.436809 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:06:51.436816 | orchestrator | skipping: [testbed-node-3] 2025-06-02 20:06:51.436821 | orchestrator | skipping: [testbed-node-4] 2025-06-02 20:06:51.436828 | orchestrator | skipping: [testbed-node-5] 2025-06-02 20:06:51.436834 | orchestrator | 2025-06-02 20:06:51.436840 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ****************** 2025-06-02 20:06:51.436846 | orchestrator | Monday 02 June 2025 19:58:23 +0000 (0:00:00.725) 0:02:33.505 *********** 2025-06-02 20:06:51.436852 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:06:51.436858 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:06:51.436864 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:06:51.436870 | 
orchestrator | skipping: [testbed-node-3]
2025-06-02 20:06:51.436876 | orchestrator | skipping: [testbed-node-4]
2025-06-02 20:06:51.436908 | orchestrator | skipping: [testbed-node-5]
2025-06-02 20:06:51.436915 | orchestrator |
2025-06-02 20:06:51.436921 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] *********************
2025-06-02 20:06:51.436954 | orchestrator | Monday 02 June 2025 19:58:23 +0000 (0:00:00.607) 0:02:34.113 ***********
2025-06-02 20:06:51.436961 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:06:51.436968 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:06:51.436974 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:06:51.436980 | orchestrator | skipping: [testbed-node-3]
2025-06-02 20:06:51.436991 | orchestrator | skipping: [testbed-node-4]
2025-06-02 20:06:51.436997 | orchestrator | skipping: [testbed-node-5]
2025-06-02 20:06:51.437003 | orchestrator |
2025-06-02 20:06:51.437009 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ******************
2025-06-02 20:06:51.437015 | orchestrator | Monday 02 June 2025 19:58:24 +0000 (0:00:00.769) 0:02:34.882 ***********
2025-06-02 20:06:51.437021 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:06:51.437027 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:06:51.437033 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:06:51.437039 | orchestrator | skipping: [testbed-node-3]
2025-06-02 20:06:51.437045 | orchestrator | skipping: [testbed-node-4]
2025-06-02 20:06:51.437051 | orchestrator | skipping: [testbed-node-5]
2025-06-02 20:06:51.437057 | orchestrator |
2025-06-02 20:06:51.437063 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] *******************
2025-06-02 20:06:51.437070 | orchestrator | Monday 02 June 2025 19:58:25 +0000 (0:00:00.565) 0:02:35.447 ***********
2025-06-02 20:06:51.437076 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:06:51.437082 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:06:51.437088 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:06:51.437094 | orchestrator | skipping: [testbed-node-3]
2025-06-02 20:06:51.437104 | orchestrator | skipping: [testbed-node-4]
2025-06-02 20:06:51.437110 | orchestrator | skipping: [testbed-node-5]
2025-06-02 20:06:51.437116 | orchestrator |
2025-06-02 20:06:51.437122 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] *******************
2025-06-02 20:06:51.437128 | orchestrator | Monday 02 June 2025 19:58:26 +0000 (0:00:00.974) 0:02:36.422 ***********
2025-06-02 20:06:51.437135 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:06:51.437141 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:06:51.437147 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:06:51.437153 | orchestrator | skipping: [testbed-node-3]
2025-06-02 20:06:51.437159 | orchestrator | skipping: [testbed-node-4]
2025-06-02 20:06:51.437165 | orchestrator | skipping: [testbed-node-5]
2025-06-02 20:06:51.437171 | orchestrator |
2025-06-02 20:06:51.437177 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ********************
2025-06-02 20:06:51.437183 | orchestrator | Monday 02 June 2025 19:58:26 +0000 (0:00:00.693) 0:02:37.116 ***********
2025-06-02 20:06:51.437189 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:06:51.437195 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:06:51.437201 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:06:51.437207 | orchestrator | skipping: [testbed-node-3]
2025-06-02 20:06:51.437214 | orchestrator | skipping: [testbed-node-4]
2025-06-02 20:06:51.437219 | orchestrator | skipping: [testbed-node-5]
2025-06-02 20:06:51.437225 | orchestrator |
2025-06-02 20:06:51.437232 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] **********************
2025-06-02 20:06:51.437238 | orchestrator | Monday 02 June 2025 19:58:27 +0000 (0:00:00.812) 0:02:37.928 ***********
2025-06-02 20:06:51.437244 | orchestrator | ok: [testbed-node-0]
2025-06-02 20:06:51.437250 | orchestrator | ok: [testbed-node-1]
2025-06-02 20:06:51.437256 | orchestrator | ok: [testbed-node-2]
2025-06-02 20:06:51.437262 | orchestrator | ok: [testbed-node-3]
2025-06-02 20:06:51.437268 | orchestrator | ok: [testbed-node-4]
2025-06-02 20:06:51.437274 | orchestrator | ok: [testbed-node-5]
2025-06-02 20:06:51.437280 | orchestrator |
2025-06-02 20:06:51.437286 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] **********************
2025-06-02 20:06:51.437293 | orchestrator | Monday 02 June 2025 19:58:28 +0000 (0:00:01.304) 0:02:39.232 ***********
2025-06-02 20:06:51.437299 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-06-02 20:06:51.437305 | orchestrator |
2025-06-02 20:06:51.437311 | orchestrator | TASK [ceph-config : Create ceph initial directories] ***************************
2025-06-02 20:06:51.437317 | orchestrator | Monday 02 June 2025 19:58:30 +0000 (0:00:01.128) 0:02:40.360 ***********
2025-06-02 20:06:51.437328 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph)
2025-06-02 20:06:51.437335 | orchestrator | changed: [testbed-node-1] => (item=/etc/ceph)
2025-06-02 20:06:51.437341 | orchestrator | changed: [testbed-node-2] => (item=/etc/ceph)
2025-06-02 20:06:51.437347 | orchestrator | changed: [testbed-node-3] => (item=/etc/ceph)
2025-06-02 20:06:51.437353 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/)
2025-06-02 20:06:51.437360 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/)
2025-06-02 20:06:51.437366 | orchestrator | changed: [testbed-node-4] => (item=/etc/ceph)
2025-06-02 20:06:51.437372 | orchestrator | changed: [testbed-node-5] => (item=/etc/ceph)
2025-06-02 20:06:51.437378 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/)
2025-06-02 20:06:51.437384 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/)
2025-06-02 20:06:51.437390 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mon)
2025-06-02 20:06:51.437396 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/)
2025-06-02 20:06:51.437402 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mon)
2025-06-02 20:06:51.437408 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mon)
2025-06-02 20:06:51.437414 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/)
2025-06-02 20:06:51.437420 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mon)
2025-06-02 20:06:51.437426 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/osd)
2025-06-02 20:06:51.437432 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mon)
2025-06-02 20:06:51.437438 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/osd)
2025-06-02 20:06:51.437444 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mon)
2025-06-02 20:06:51.437450 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/osd)
2025-06-02 20:06:51.437478 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/osd)
2025-06-02 20:06:51.437489 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mds)
2025-06-02 20:06:51.437498 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/osd)
2025-06-02 20:06:51.437507 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mds)
2025-06-02 20:06:51.437516 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/osd)
2025-06-02 20:06:51.437526 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mds)
2025-06-02 20:06:51.437536 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds)
2025-06-02 20:06:51.437545 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds)
2025-06-02 20:06:51.437554 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/tmp)
2025-06-02 20:06:51.437564 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/tmp)
2025-06-02 20:06:51.437574 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds)
2025-06-02 20:06:51.437584 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/tmp)
2025-06-02 20:06:51.437594 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/tmp)
2025-06-02 20:06:51.437605 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/tmp)
2025-06-02 20:06:51.437614 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/tmp)
2025-06-02 20:06:51.437626 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/crash)
2025-06-02 20:06:51.437638 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/crash)
2025-06-02 20:06:51.437644 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/crash)
2025-06-02 20:06:51.437650 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/crash)
2025-06-02 20:06:51.437656 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/crash)
2025-06-02 20:06:51.437662 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/crash)
2025-06-02 20:06:51.437668 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/radosgw)
2025-06-02 20:06:51.437674 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/radosgw)
2025-06-02 20:06:51.437686 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/radosgw)
2025-06-02 20:06:51.437692 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/radosgw)
2025-06-02 20:06:51.437698 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/radosgw)
2025-06-02 20:06:51.437704 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/radosgw)
2025-06-02 20:06:51.437710 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rgw)
2025-06-02 20:06:51.437716 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rgw)
2025-06-02 20:06:51.437723 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rgw)
2025-06-02 20:06:51.437729 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rgw)
2025-06-02 20:06:51.437735 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rgw)
2025-06-02 20:06:51.437741 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rgw)
2025-06-02 20:06:51.437747 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mgr)
2025-06-02 20:06:51.437753 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mgr)
2025-06-02 20:06:51.437759 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mgr)
2025-06-02 20:06:51.437765 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mgr)
2025-06-02 20:06:51.437771 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mgr)
2025-06-02 20:06:51.437777 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mgr)
2025-06-02 20:06:51.437783 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds)
2025-06-02 20:06:51.437789 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mds)
2025-06-02 20:06:51.437795 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mds)
2025-06-02 20:06:51.437801 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds)
2025-06-02 20:06:51.437807 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mds)
2025-06-02 20:06:51.437813 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds)
2025-06-02 20:06:51.437819 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd)
2025-06-02 20:06:51.437825 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-osd)
2025-06-02 20:06:51.437831 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd)
2025-06-02 20:06:51.437837 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-osd)
2025-06-02 20:06:51.437843 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-osd)
2025-06-02 20:06:51.437849 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd)
2025-06-02 20:06:51.437855 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd)
2025-06-02 20:06:51.437861 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd)
2025-06-02 20:06:51.437867 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd)
2025-06-02 20:06:51.437874 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd)
2025-06-02 20:06:51.437899 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd)
2025-06-02 20:06:51.437906 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd)
2025-06-02 20:06:51.437912 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2025-06-02 20:06:51.437948 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2025-06-02 20:06:51.437955 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2025-06-02 20:06:51.437961 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2025-06-02 20:06:51.437967 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2025-06-02 20:06:51.437973 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2025-06-02 20:06:51.437984 | orchestrator | changed: [testbed-node-1] => (item=/var/run/ceph)
2025-06-02 20:06:51.437990 | orchestrator | changed: [testbed-node-3] => (item=/var/run/ceph)
2025-06-02 20:06:51.437996 | orchestrator | changed: [testbed-node-4] => (item=/var/run/ceph)
2025-06-02 20:06:51.438002 | orchestrator | changed: [testbed-node-0] => (item=/var/run/ceph)
2025-06-02 20:06:51.438008 | orchestrator | changed: [testbed-node-2] => (item=/var/run/ceph)
2025-06-02 20:06:51.438046 | orchestrator | changed: [testbed-node-5] => (item=/var/run/ceph)
2025-06-02 20:06:51.438054 | orchestrator | changed: [testbed-node-1] => (item=/var/log/ceph)
2025-06-02 20:06:51.438060 | orchestrator | changed: [testbed-node-4] => (item=/var/log/ceph)
2025-06-02 20:06:51.438066 | orchestrator | changed: [testbed-node-3] => (item=/var/log/ceph)
2025-06-02 20:06:51.438072 | orchestrator | changed: [testbed-node-2] => (item=/var/log/ceph)
2025-06-02 20:06:51.438082 | orchestrator | changed: [testbed-node-0] => (item=/var/log/ceph)
2025-06-02 20:06:51.438088 | orchestrator | changed: [testbed-node-5] => (item=/var/log/ceph)
2025-06-02 20:06:51.438094 | orchestrator |
2025-06-02 20:06:51.438100 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************
2025-06-02 20:06:51.438107 | orchestrator | Monday 02 June 2025 19:58:36 +0000 (0:00:06.182) 0:02:46.543 ***********
2025-06-02 20:06:51.438113 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:06:51.438119 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:06:51.438125 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:06:51.438132 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-06-02 20:06:51.438138 | orchestrator |
2025-06-02 20:06:51.438144 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] *****************
2025-06-02 20:06:51.438150 | orchestrator | Monday 02 June 2025 19:58:37 +0000 (0:00:00.887) 0:02:47.431 ***********
2025-06-02 20:06:51.438156 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2025-06-02 20:06:51.438163 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2025-06-02 20:06:51.438169 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2025-06-02 20:06:51.438175 | orchestrator |
2025-06-02 20:06:51.438181 | orchestrator | TASK [ceph-config : Generate environment file] *********************************
2025-06-02 20:06:51.438187 | orchestrator | Monday 02 June 2025 19:58:37 +0000 (0:00:00.729) 0:02:48.160 ***********
2025-06-02 20:06:51.438193 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2025-06-02 20:06:51.438199 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2025-06-02 20:06:51.438206 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2025-06-02 20:06:51.438212 | orchestrator |
2025-06-02 20:06:51.438218 | orchestrator | TASK [ceph-config : Reset num_osds] ********************************************
2025-06-02 20:06:51.438224 | orchestrator | Monday 02 June 2025 19:58:39 +0000 (0:00:01.678) 0:02:49.839 ***********
2025-06-02 20:06:51.438230 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:06:51.438236 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:06:51.438242 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:06:51.438248 | orchestrator | ok: [testbed-node-3]
2025-06-02 20:06:51.438254 | orchestrator | ok: [testbed-node-4]
2025-06-02 20:06:51.438260 | orchestrator | ok: [testbed-node-5]
2025-06-02 20:06:51.438266 | orchestrator |
2025-06-02 20:06:51.438273 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] *********************
2025-06-02 20:06:51.438283 | orchestrator | Monday 02 June 2025 19:58:40 +0000 (0:00:00.618) 0:02:50.457 ***********
2025-06-02 20:06:51.438299 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:06:51.438308 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:06:51.438318 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:06:51.438329 | orchestrator | ok: [testbed-node-3]
2025-06-02 20:06:51.438340 | orchestrator | ok: [testbed-node-4]
2025-06-02 20:06:51.438350 | orchestrator | ok: [testbed-node-5]
2025-06-02 20:06:51.438361 | orchestrator |
2025-06-02 20:06:51.438367 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ******************
2025-06-02 20:06:51.438373 | orchestrator | Monday 02 June 2025 19:58:40 +0000 (0:00:00.764) 0:02:51.222 ***********
2025-06-02 20:06:51.438379 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:06:51.438385 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:06:51.438391 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:06:51.438397 | orchestrator | skipping: [testbed-node-3]
2025-06-02 20:06:51.438403 | orchestrator | skipping: [testbed-node-4]
2025-06-02 20:06:51.438409 | orchestrator | skipping: [testbed-node-5]
2025-06-02 20:06:51.438415 | orchestrator |
2025-06-02 20:06:51.438421 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] *********************************
2025-06-02 20:06:51.438427 | orchestrator | Monday 02 June 2025 19:58:41 +0000 (0:00:00.554) 0:02:51.776 ***********
2025-06-02 20:06:51.438433 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:06:51.438439 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:06:51.438481 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:06:51.438493 | orchestrator | skipping: [testbed-node-3]
2025-06-02 20:06:51.438503 | orchestrator | skipping: [testbed-node-4]
2025-06-02 20:06:51.438514 | orchestrator | skipping: [testbed-node-5]
2025-06-02 20:06:51.438520 | orchestrator |
2025-06-02 20:06:51.438526 | orchestrator | TASK [ceph-config : Set_fact _devices] *****************************************
2025-06-02 20:06:51.438532 | orchestrator | Monday 02 June 2025 19:58:42 +0000 (0:00:00.609) 0:02:52.386 ***********
2025-06-02 20:06:51.438538 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:06:51.438544 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:06:51.438550 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:06:51.438556 | orchestrator | skipping: [testbed-node-3]
2025-06-02 20:06:51.438562 | orchestrator | skipping: [testbed-node-4]
2025-06-02 20:06:51.438568 | orchestrator | skipping: [testbed-node-5]
2025-06-02 20:06:51.438574 | orchestrator |
2025-06-02 20:06:51.438580 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
2025-06-02 20:06:51.438586 | orchestrator | Monday 02 June 2025 19:58:42 +0000 (0:00:00.502) 0:02:52.888 ***********
2025-06-02 20:06:51.438592 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:06:51.438598 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:06:51.438604 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:06:51.438610 | orchestrator | skipping: [testbed-node-3]
2025-06-02 20:06:51.438616 | orchestrator | skipping: [testbed-node-4]
2025-06-02 20:06:51.438622 | orchestrator | skipping: [testbed-node-5]
2025-06-02 20:06:51.438628 | orchestrator |
2025-06-02 20:06:51.438642 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
2025-06-02 20:06:51.438648 | orchestrator | Monday 02 June 2025 19:58:43 +0000 (0:00:00.716) 0:02:53.604 ***********
2025-06-02 20:06:51.438654 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:06:51.438660 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:06:51.438666 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:06:51.438672 | orchestrator | skipping: [testbed-node-3]
2025-06-02 20:06:51.438678 | orchestrator | skipping: [testbed-node-4]
2025-06-02 20:06:51.438684 | orchestrator | skipping: [testbed-node-5]
2025-06-02 20:06:51.438690 | orchestrator |
2025-06-02 20:06:51.438697 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
2025-06-02 20:06:51.438703 | orchestrator | Monday 02 June 2025 19:58:43 +0000 (0:00:00.551) 0:02:54.156 ***********
2025-06-02 20:06:51.438709 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:06:51.438720 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:06:51.438726 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:06:51.438732 | orchestrator | skipping: [testbed-node-3]
2025-06-02 20:06:51.438738 | orchestrator | skipping: [testbed-node-4]
2025-06-02 20:06:51.438744 | orchestrator | skipping: [testbed-node-5]
2025-06-02 20:06:51.438750 | orchestrator |
2025-06-02 20:06:51.438756 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] ***
2025-06-02 20:06:51.438762 | orchestrator | Monday 02 June 2025 19:58:44 +0000 (0:00:00.777) 0:02:54.933 ***********
2025-06-02 20:06:51.438769 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:06:51.438775 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:06:51.438781 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:06:51.438787 | orchestrator | ok: [testbed-node-4]
2025-06-02 20:06:51.438793 | orchestrator | ok: [testbed-node-3]
2025-06-02 20:06:51.438799 | orchestrator | ok: [testbed-node-5]
2025-06-02 20:06:51.438805 | orchestrator |
2025-06-02 20:06:51.438811 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] *********************
2025-06-02 20:06:51.438817 | orchestrator | Monday 02 June 2025 19:58:48 +0000 (0:00:03.368) 0:02:58.301 ***********
2025-06-02 20:06:51.438823 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:06:51.438829 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:06:51.438835 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:06:51.438841 | orchestrator | ok: [testbed-node-3]
2025-06-02 20:06:51.438847 | orchestrator | ok: [testbed-node-4]
2025-06-02 20:06:51.438853 | orchestrator | ok: [testbed-node-5]
2025-06-02 20:06:51.438859 | orchestrator |
2025-06-02 20:06:51.438865 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] *******************************
2025-06-02 20:06:51.438871 | orchestrator | Monday 02 June 2025 19:58:48 +0000 (0:00:00.783) 0:02:59.085 ***********
2025-06-02 20:06:51.438877 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:06:51.438928 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:06:51.438934 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:06:51.438940 | orchestrator | ok: [testbed-node-3]
2025-06-02 20:06:51.438946 | orchestrator | ok: [testbed-node-4]
2025-06-02 20:06:51.438952 | orchestrator | ok: [testbed-node-5]
2025-06-02 20:06:51.438958 | orchestrator |
2025-06-02 20:06:51.438964 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] **************
2025-06-02 20:06:51.438970 | orchestrator | Monday 02 June 2025 19:58:49 +0000 (0:00:00.558) 0:02:59.644 ***********
2025-06-02 20:06:51.438977 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:06:51.438983 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:06:51.438989 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:06:51.438995 | orchestrator | skipping: [testbed-node-3]
2025-06-02 20:06:51.439001 | orchestrator | skipping: [testbed-node-4]
2025-06-02 20:06:51.439007 | orchestrator | skipping: [testbed-node-5]
2025-06-02 20:06:51.439013 | orchestrator |
2025-06-02 20:06:51.439019 | orchestrator | TASK [ceph-config : Render rgw configs] ****************************************
2025-06-02 20:06:51.439025 | orchestrator | Monday 02 June 2025 19:58:50 +0000 (0:00:00.746) 0:03:00.390 ***********
2025-06-02 20:06:51.439031 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:06:51.439037 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:06:51.439043 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:06:51.439049 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2025-06-02 20:06:51.439055 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2025-06-02 20:06:51.439061 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2025-06-02 20:06:51.439068 | orchestrator |
2025-06-02 20:06:51.439074 | orchestrator | TASK [ceph-config : Set config to cluster] *************************************
2025-06-02 20:06:51.439107 | orchestrator | Monday 02 June 2025 19:58:50 +0000 (0:00:00.757) 0:03:01.148 ***********
2025-06-02 20:06:51.439121 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:06:51.439127 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:06:51.439133 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:06:51.439142 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log'}])
2025-06-02 20:06:51.439150 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.13:8081'}])
2025-06-02 20:06:51.439158 | orchestrator | skipping: [testbed-node-3]
2025-06-02 20:06:51.439168 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log'}])
2025-06-02 20:06:51.439175 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.14:8081'}])
2025-06-02 20:06:51.439181 | orchestrator | skipping: [testbed-node-4]
2025-06-02 20:06:51.439187 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log'}])
2025-06-02 20:06:51.439196 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.15:8081'}])
2025-06-02 20:06:51.439207 | orchestrator | skipping: [testbed-node-5]
2025-06-02 20:06:51.439219 | orchestrator |
2025-06-02 20:06:51.439231 | orchestrator | TASK [ceph-config : Set rgw configs to file] ***********************************
2025-06-02 20:06:51.439242 | orchestrator | Monday 02 June 2025 19:58:51 +0000 (0:00:00.868) 0:03:02.016 ***********
2025-06-02 20:06:51.439253 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:06:51.439263 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:06:51.439269 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:06:51.439275 | orchestrator | skipping: [testbed-node-3]
2025-06-02 20:06:51.439281 | orchestrator | skipping: [testbed-node-4]
2025-06-02 20:06:51.439287 | orchestrator | skipping: [testbed-node-5]
2025-06-02 20:06:51.439293 | orchestrator |
2025-06-02 20:06:51.439299 | orchestrator | TASK [ceph-config : Create ceph conf directory] ********************************
2025-06-02 20:06:51.439305 | orchestrator | Monday 02 June 2025 19:58:52 +0000 (0:00:00.683) 0:03:02.700 ***********
2025-06-02 20:06:51.439311 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:06:51.439317 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:06:51.439323 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:06:51.439329 | orchestrator | skipping: [testbed-node-3]
2025-06-02 20:06:51.439335 | orchestrator | skipping: [testbed-node-4]
2025-06-02 20:06:51.439341 | orchestrator | skipping: [testbed-node-5]
2025-06-02 20:06:51.439347 | orchestrator |
2025-06-02 20:06:51.439353 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2025-06-02 20:06:51.439359 | orchestrator | Monday 02 June 2025 19:58:53 +0000 (0:00:00.701) 0:03:03.401 ***********
2025-06-02 20:06:51.439370 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:06:51.439376 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:06:51.439382 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:06:51.439388 | orchestrator | skipping: [testbed-node-3]
2025-06-02 20:06:51.439394 | orchestrator | skipping: [testbed-node-4]
2025-06-02 20:06:51.439400 | orchestrator | skipping: [testbed-node-5]
2025-06-02 20:06:51.439406 | orchestrator |
2025-06-02 20:06:51.439412 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2025-06-02 20:06:51.439419 | orchestrator | Monday 02 June 2025 19:58:53 +0000 (0:00:00.624) 0:03:04.026 ***********
2025-06-02 20:06:51.439425 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:06:51.439431 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:06:51.439437 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:06:51.439443 | orchestrator | skipping: [testbed-node-3]
2025-06-02 20:06:51.439449 | orchestrator | skipping: [testbed-node-4]
2025-06-02 20:06:51.439455 | orchestrator | skipping: [testbed-node-5]
2025-06-02 20:06:51.439461 | orchestrator |
2025-06-02 20:06:51.439467 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2025-06-02 20:06:51.439473 | orchestrator | Monday 02 June 2025 19:58:54 +0000 (0:00:00.845) 0:03:04.872 ***********
2025-06-02 20:06:51.439479 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:06:51.439485 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:06:51.439491 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:06:51.439520 | orchestrator | skipping: [testbed-node-3]
2025-06-02 20:06:51.439527 | orchestrator | skipping: [testbed-node-4]
2025-06-02 20:06:51.439534 | orchestrator | skipping: [testbed-node-5]
2025-06-02 20:06:51.439540 | orchestrator |
2025-06-02 20:06:51.439549 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2025-06-02 20:06:51.439560 | orchestrator | Monday 02 June 2025 19:58:55 +0000 (0:00:00.657) 0:03:05.529 ***********
2025-06-02 20:06:51.439571 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:06:51.439581 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:06:51.439591 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:06:51.439601 | orchestrator | ok: [testbed-node-3]
2025-06-02 20:06:51.439613 | orchestrator | ok: [testbed-node-4]
2025-06-02 20:06:51.439624 | orchestrator | ok: [testbed-node-5]
2025-06-02 20:06:51.439635 | orchestrator |
2025-06-02 20:06:51.439646 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2025-06-02 20:06:51.439653 | orchestrator | Monday 02 June 2025 19:58:56 +0000 (0:00:00.925) 0:03:06.454 ***********
2025-06-02 20:06:51.439659 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2025-06-02 20:06:51.439665 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2025-06-02 20:06:51.439671 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2025-06-02 20:06:51.439677 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:06:51.439683 | orchestrator |
2025-06-02 20:06:51.439689 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2025-06-02 20:06:51.439695 | orchestrator | Monday 02 June 2025 19:58:56 +0000 (0:00:00.398) 0:03:06.853 ***********
2025-06-02 20:06:51.439707 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2025-06-02 20:06:51.439714 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2025-06-02 20:06:51.439720 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2025-06-02 20:06:51.439726 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:06:51.439732 | orchestrator |
2025-06-02 20:06:51.439738 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2025-06-02 20:06:51.439744 | orchestrator | Monday 02 June 2025 19:58:56 +0000 (0:00:00.418) 0:03:07.271 ***********
2025-06-02 20:06:51.439750 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2025-06-02 20:06:51.439756 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2025-06-02 20:06:51.439762 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2025-06-02 20:06:51.439768 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:06:51.439780 | orchestrator |
2025-06-02 20:06:51.439786 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2025-06-02 20:06:51.439792 | orchestrator | Monday 02 June 2025 19:58:57 +0000 (0:00:00.359) 0:03:07.630 ***********
2025-06-02 20:06:51.439798 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:06:51.439804 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:06:51.439810 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:06:51.439816 | orchestrator | ok: [testbed-node-3]
2025-06-02 20:06:51.439822 | orchestrator | ok: [testbed-node-4]
2025-06-02 20:06:51.439828 | orchestrator | ok: [testbed-node-5]
2025-06-02 20:06:51.439834 | orchestrator |
2025-06-02 20:06:51.439840 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2025-06-02 20:06:51.439846 | orchestrator | Monday 02 June 2025 19:58:57 +0000 (0:00:00.562) 0:03:08.193 ***********
2025-06-02 20:06:51.439852 | orchestrator | skipping: [testbed-node-0] => (item=0)
2025-06-02 20:06:51.439858 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:06:51.439864 | orchestrator | skipping: [testbed-node-1] => (item=0)
2025-06-02 20:06:51.439870 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:06:51.439876 | orchestrator | skipping: [testbed-node-2] => (item=0)
2025-06-02 20:06:51.439903 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:06:51.439910 | orchestrator | ok: [testbed-node-3] => (item=0)
2025-06-02 20:06:51.439916 | orchestrator | ok: [testbed-node-4] => (item=0)
2025-06-02 20:06:51.439922 | orchestrator | ok: [testbed-node-5] => (item=0)
2025-06-02 20:06:51.439928 | orchestrator |
2025-06-02 20:06:51.439934 | orchestrator | TASK [ceph-config : Generate Ceph file] ****************************************
2025-06-02 20:06:51.439940 | orchestrator | Monday 02 June 2025 19:58:59 +0000 (0:00:01.645) 0:03:09.838 ***********
2025-06-02 20:06:51.439946 | orchestrator | changed: [testbed-node-0]
2025-06-02 20:06:51.439953 | orchestrator | changed: [testbed-node-1]
2025-06-02 20:06:51.439959 | orchestrator | changed: [testbed-node-2]
2025-06-02 20:06:51.439965 | orchestrator | changed: [testbed-node-3]
2025-06-02 20:06:51.439970 | orchestrator | changed: [testbed-node-4]
2025-06-02 20:06:51.439977 | orchestrator | changed: [testbed-node-5]
2025-06-02 20:06:51.439983 | orchestrator |
2025-06-02 20:06:51.439989 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2025-06-02 20:06:51.439995 | orchestrator | Monday 02 June 2025 19:59:02 +0000 (0:00:02.469) 0:03:12.308 ***********
2025-06-02 20:06:51.440001 | orchestrator | changed: [testbed-node-0]
2025-06-02 20:06:51.440007 | orchestrator | changed: [testbed-node-1]
2025-06-02 20:06:51.440013 | orchestrator | changed: [testbed-node-3]
2025-06-02 20:06:51.440019 | orchestrator | changed: [testbed-node-2]
2025-06-02 20:06:51.440025 | orchestrator | changed: [testbed-node-4]
2025-06-02 20:06:51.440031 | orchestrator | changed: [testbed-node-5]
2025-06-02 20:06:51.440037 | orchestrator |
2025-06-02 20:06:51.440044 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] **********************************
2025-06-02 20:06:51.440050 | orchestrator | Monday 02 June 2025 19:59:03 +0000 (0:00:00.996) 0:03:13.305 ***********
2025-06-02 20:06:51.440056 | orchestrator | skipping: [testbed-node-3]
2025-06-02 20:06:51.440062 | orchestrator | skipping: [testbed-node-4]
2025-06-02 20:06:51.440068 | orchestrator | skipping: [testbed-node-5]
2025-06-02 20:06:51.440074 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-02 20:06:51.440081 | orchestrator |
2025-06-02 20:06:51.440087 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ********
2025-06-02 20:06:51.440093 | orchestrator | Monday 02 June 2025 19:59:03 +0000 (0:00:00.920) 0:03:14.225 ***********
2025-06-02 20:06:51.440099 | orchestrator | ok: [testbed-node-0]
2025-06-02 20:06:51.440105 | orchestrator | ok: [testbed-node-1]
2025-06-02 20:06:51.440111 | orchestrator | ok: [testbed-node-2]
2025-06-02 20:06:51.440117 | orchestrator |
2025-06-02 20:06:51.440124 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] ***********************
2025-06-02 20:06:51.440156 | orchestrator | Monday 02 June 2025 19:59:04 +0000 (0:00:00.341) 0:03:14.567 ***********
2025-06-02 20:06:51.440168 | orchestrator | changed: [testbed-node-0]
2025-06-02 20:06:51.440175 | orchestrator | changed: [testbed-node-1]
2025-06-02 20:06:51.440181 | orchestrator | changed: [testbed-node-2]
2025-06-02 20:06:51.440187 | orchestrator |
2025-06-02 20:06:51.440193 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ********************
2025-06-02 20:06:51.440199 | orchestrator | Monday 02 June 2025 19:59:05 +0000 (0:00:01.594) 0:03:16.162 ***********
2025-06-02 20:06:51.440205 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2025-06-02 20:06:51.440211 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2025-06-02 20:06:51.440217 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2025-06-02 20:06:51.440223 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:06:51.440229 | orchestrator |
2025-06-02 20:06:51.440235 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] *********
2025-06-02 20:06:51.440241 | orchestrator | Monday 02 June 2025 19:59:06 +0000 (0:00:00.533) 0:03:16.695 ***********
2025-06-02 20:06:51.440252 | orchestrator | ok: [testbed-node-0]
2025-06-02 20:06:51.440263 | orchestrator | ok: [testbed-node-1]
2025-06-02 20:06:51.440273 | orchestrator | ok: [testbed-node-2]
2025-06-02 20:06:51.440284 | orchestrator |
2025-06-02 20:06:51.440294 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler]
********************************** 2025-06-02 20:06:51.440305 | orchestrator | Monday 02 June 2025 19:59:06 +0000 (0:00:00.301) 0:03:16.996 *********** 2025-06-02 20:06:51.440315 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:06:51.440332 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:06:51.440343 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:06:51.440353 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-02 20:06:51.440364 | orchestrator | 2025-06-02 20:06:51.440374 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] ********************** 2025-06-02 20:06:51.440385 | orchestrator | Monday 02 June 2025 19:59:07 +0000 (0:00:00.844) 0:03:17.841 *********** 2025-06-02 20:06:51.440396 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-06-02 20:06:51.440407 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-06-02 20:06:51.440419 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-06-02 20:06:51.440430 | orchestrator | skipping: [testbed-node-3] 2025-06-02 20:06:51.440441 | orchestrator | 2025-06-02 20:06:51.440452 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ******** 2025-06-02 20:06:51.440464 | orchestrator | Monday 02 June 2025 19:59:07 +0000 (0:00:00.340) 0:03:18.181 *********** 2025-06-02 20:06:51.440475 | orchestrator | skipping: [testbed-node-3] 2025-06-02 20:06:51.440485 | orchestrator | skipping: [testbed-node-4] 2025-06-02 20:06:51.440497 | orchestrator | skipping: [testbed-node-5] 2025-06-02 20:06:51.440503 | orchestrator | 2025-06-02 20:06:51.440509 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] ******************************* 2025-06-02 20:06:51.440516 | orchestrator | Monday 02 June 2025 19:59:08 +0000 (0:00:00.293) 0:03:18.474 *********** 2025-06-02 20:06:51.440522 | orchestrator | 
skipping: [testbed-node-3] 2025-06-02 20:06:51.440528 | orchestrator | 2025-06-02 20:06:51.440534 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] *********************** 2025-06-02 20:06:51.440540 | orchestrator | Monday 02 June 2025 19:59:08 +0000 (0:00:00.201) 0:03:18.676 *********** 2025-06-02 20:06:51.440546 | orchestrator | skipping: [testbed-node-3] 2025-06-02 20:06:51.440552 | orchestrator | skipping: [testbed-node-4] 2025-06-02 20:06:51.440558 | orchestrator | skipping: [testbed-node-5] 2025-06-02 20:06:51.440564 | orchestrator | 2025-06-02 20:06:51.440570 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] ********************************* 2025-06-02 20:06:51.440576 | orchestrator | Monday 02 June 2025 19:59:08 +0000 (0:00:00.302) 0:03:18.979 *********** 2025-06-02 20:06:51.440582 | orchestrator | skipping: [testbed-node-3] 2025-06-02 20:06:51.440591 | orchestrator | 2025-06-02 20:06:51.440601 | orchestrator | RUNNING HANDLER [ceph-handler : Get balancer module status] ******************** 2025-06-02 20:06:51.440621 | orchestrator | Monday 02 June 2025 19:59:08 +0000 (0:00:00.204) 0:03:19.184 *********** 2025-06-02 20:06:51.440631 | orchestrator | skipping: [testbed-node-3] 2025-06-02 20:06:51.440641 | orchestrator | 2025-06-02 20:06:51.440652 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] ************** 2025-06-02 20:06:51.440658 | orchestrator | Monday 02 June 2025 19:59:09 +0000 (0:00:00.221) 0:03:19.405 *********** 2025-06-02 20:06:51.440664 | orchestrator | skipping: [testbed-node-3] 2025-06-02 20:06:51.440670 | orchestrator | 2025-06-02 20:06:51.440676 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ****************************** 2025-06-02 20:06:51.440682 | orchestrator | Monday 02 June 2025 19:59:09 +0000 (0:00:00.252) 0:03:19.658 *********** 2025-06-02 20:06:51.440688 | orchestrator | skipping: [testbed-node-3] 2025-06-02 20:06:51.440694 | orchestrator | 
2025-06-02 20:06:51.440700 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] ***************** 2025-06-02 20:06:51.440707 | orchestrator | Monday 02 June 2025 19:59:09 +0000 (0:00:00.203) 0:03:19.861 *********** 2025-06-02 20:06:51.440713 | orchestrator | skipping: [testbed-node-3] 2025-06-02 20:06:51.440719 | orchestrator | 2025-06-02 20:06:51.440725 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] ******************* 2025-06-02 20:06:51.440731 | orchestrator | Monday 02 June 2025 19:59:09 +0000 (0:00:00.219) 0:03:20.081 *********** 2025-06-02 20:06:51.440737 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-06-02 20:06:51.440743 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-06-02 20:06:51.440749 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-06-02 20:06:51.440755 | orchestrator | skipping: [testbed-node-3] 2025-06-02 20:06:51.440761 | orchestrator | 2025-06-02 20:06:51.440767 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] ********* 2025-06-02 20:06:51.440773 | orchestrator | Monday 02 June 2025 19:59:10 +0000 (0:00:00.384) 0:03:20.465 *********** 2025-06-02 20:06:51.440779 | orchestrator | skipping: [testbed-node-3] 2025-06-02 20:06:51.440785 | orchestrator | skipping: [testbed-node-4] 2025-06-02 20:06:51.440791 | orchestrator | skipping: [testbed-node-5] 2025-06-02 20:06:51.440797 | orchestrator | 2025-06-02 20:06:51.440832 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] *************** 2025-06-02 20:06:51.440839 | orchestrator | Monday 02 June 2025 19:59:10 +0000 (0:00:00.305) 0:03:20.771 *********** 2025-06-02 20:06:51.440845 | orchestrator | skipping: [testbed-node-3] 2025-06-02 20:06:51.440851 | orchestrator | 2025-06-02 20:06:51.440857 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] **************************** 2025-06-02 
20:06:51.440864 | orchestrator | Monday 02 June 2025 19:59:10 +0000 (0:00:00.235) 0:03:21.006 *********** 2025-06-02 20:06:51.440870 | orchestrator | skipping: [testbed-node-3] 2025-06-02 20:06:51.440876 | orchestrator | 2025-06-02 20:06:51.440929 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] ********************************** 2025-06-02 20:06:51.440935 | orchestrator | Monday 02 June 2025 19:59:10 +0000 (0:00:00.209) 0:03:21.216 *********** 2025-06-02 20:06:51.440942 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:06:51.440948 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:06:51.440954 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:06:51.440960 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-02 20:06:51.440966 | orchestrator | 2025-06-02 20:06:51.440973 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called before restart] ******** 2025-06-02 20:06:51.440979 | orchestrator | Monday 02 June 2025 19:59:11 +0000 (0:00:00.857) 0:03:22.073 *********** 2025-06-02 20:06:51.440985 | orchestrator | ok: [testbed-node-3] 2025-06-02 20:06:51.440991 | orchestrator | ok: [testbed-node-4] 2025-06-02 20:06:51.440997 | orchestrator | ok: [testbed-node-5] 2025-06-02 20:06:51.441003 | orchestrator | 2025-06-02 20:06:51.441014 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mds restart script] *********************** 2025-06-02 20:06:51.441020 | orchestrator | Monday 02 June 2025 19:59:12 +0000 (0:00:00.262) 0:03:22.336 *********** 2025-06-02 20:06:51.441034 | orchestrator | changed: [testbed-node-3] 2025-06-02 20:06:51.441040 | orchestrator | changed: [testbed-node-4] 2025-06-02 20:06:51.441047 | orchestrator | changed: [testbed-node-5] 2025-06-02 20:06:51.441053 | orchestrator | 2025-06-02 20:06:51.441059 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ******************** 2025-06-02 
20:06:51.441065 | orchestrator | Monday 02 June 2025 19:59:13 +0000 (0:00:01.149) 0:03:23.485 *********** 2025-06-02 20:06:51.441071 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-06-02 20:06:51.441077 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-06-02 20:06:51.441084 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-06-02 20:06:51.441090 | orchestrator | skipping: [testbed-node-3] 2025-06-02 20:06:51.441096 | orchestrator | 2025-06-02 20:06:51.441102 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] ********* 2025-06-02 20:06:51.441108 | orchestrator | Monday 02 June 2025 19:59:14 +0000 (0:00:00.900) 0:03:24.386 *********** 2025-06-02 20:06:51.441114 | orchestrator | ok: [testbed-node-3] 2025-06-02 20:06:51.441120 | orchestrator | ok: [testbed-node-4] 2025-06-02 20:06:51.441126 | orchestrator | ok: [testbed-node-5] 2025-06-02 20:06:51.441133 | orchestrator | 2025-06-02 20:06:51.441139 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] ********************************** 2025-06-02 20:06:51.441145 | orchestrator | Monday 02 June 2025 19:59:14 +0000 (0:00:00.345) 0:03:24.731 *********** 2025-06-02 20:06:51.441151 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:06:51.441157 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:06:51.441164 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:06:51.441170 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-02 20:06:51.441176 | orchestrator | 2025-06-02 20:06:51.441182 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ******** 2025-06-02 20:06:51.441188 | orchestrator | Monday 02 June 2025 19:59:15 +0000 (0:00:00.991) 0:03:25.723 *********** 2025-06-02 20:06:51.441194 | orchestrator | ok: [testbed-node-3] 2025-06-02 20:06:51.441201 | orchestrator | 
ok: [testbed-node-4] 2025-06-02 20:06:51.441207 | orchestrator | ok: [testbed-node-5] 2025-06-02 20:06:51.441213 | orchestrator | 2025-06-02 20:06:51.441219 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] *********************** 2025-06-02 20:06:51.441225 | orchestrator | Monday 02 June 2025 19:59:15 +0000 (0:00:00.470) 0:03:26.193 *********** 2025-06-02 20:06:51.441232 | orchestrator | changed: [testbed-node-3] 2025-06-02 20:06:51.441238 | orchestrator | changed: [testbed-node-4] 2025-06-02 20:06:51.441244 | orchestrator | changed: [testbed-node-5] 2025-06-02 20:06:51.441250 | orchestrator | 2025-06-02 20:06:51.441256 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ******************** 2025-06-02 20:06:51.441262 | orchestrator | Monday 02 June 2025 19:59:17 +0000 (0:00:01.459) 0:03:27.653 *********** 2025-06-02 20:06:51.441268 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-06-02 20:06:51.441274 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-06-02 20:06:51.441281 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-06-02 20:06:51.441287 | orchestrator | skipping: [testbed-node-3] 2025-06-02 20:06:51.441293 | orchestrator | 2025-06-02 20:06:51.441299 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] ********* 2025-06-02 20:06:51.441305 | orchestrator | Monday 02 June 2025 19:59:18 +0000 (0:00:00.859) 0:03:28.512 *********** 2025-06-02 20:06:51.441311 | orchestrator | ok: [testbed-node-3] 2025-06-02 20:06:51.441317 | orchestrator | ok: [testbed-node-4] 2025-06-02 20:06:51.441323 | orchestrator | ok: [testbed-node-5] 2025-06-02 20:06:51.441329 | orchestrator | 2025-06-02 20:06:51.441336 | orchestrator | RUNNING HANDLER [ceph-handler : Rbdmirrors handler] **************************** 2025-06-02 20:06:51.441342 | orchestrator | Monday 02 June 2025 19:59:18 +0000 (0:00:00.351) 0:03:28.864 *********** 
2025-06-02 20:06:51.441348 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:06:51.441359 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:06:51.441365 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:06:51.441371 | orchestrator | skipping: [testbed-node-3] 2025-06-02 20:06:51.441377 | orchestrator | skipping: [testbed-node-4] 2025-06-02 20:06:51.441383 | orchestrator | skipping: [testbed-node-5] 2025-06-02 20:06:51.441389 | orchestrator | 2025-06-02 20:06:51.441395 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] ********************************** 2025-06-02 20:06:51.441402 | orchestrator | Monday 02 June 2025 19:59:19 +0000 (0:00:00.932) 0:03:29.796 *********** 2025-06-02 20:06:51.441430 | orchestrator | skipping: [testbed-node-3] 2025-06-02 20:06:51.441438 | orchestrator | skipping: [testbed-node-4] 2025-06-02 20:06:51.441444 | orchestrator | skipping: [testbed-node-5] 2025-06-02 20:06:51.441450 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 20:06:51.441456 | orchestrator | 2025-06-02 20:06:51.441463 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ******** 2025-06-02 20:06:51.441469 | orchestrator | Monday 02 June 2025 19:59:20 +0000 (0:00:01.017) 0:03:30.813 *********** 2025-06-02 20:06:51.441475 | orchestrator | ok: [testbed-node-0] 2025-06-02 20:06:51.441481 | orchestrator | ok: [testbed-node-1] 2025-06-02 20:06:51.441487 | orchestrator | ok: [testbed-node-2] 2025-06-02 20:06:51.441493 | orchestrator | 2025-06-02 20:06:51.441499 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] *********************** 2025-06-02 20:06:51.441505 | orchestrator | Monday 02 June 2025 19:59:20 +0000 (0:00:00.338) 0:03:31.152 *********** 2025-06-02 20:06:51.441511 | orchestrator | changed: [testbed-node-0] 2025-06-02 20:06:51.441517 | orchestrator | changed: [testbed-node-1] 2025-06-02 
20:06:51.441523 | orchestrator | changed: [testbed-node-2] 2025-06-02 20:06:51.441529 | orchestrator | 2025-06-02 20:06:51.441536 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ******************** 2025-06-02 20:06:51.441542 | orchestrator | Monday 02 June 2025 19:59:22 +0000 (0:00:01.244) 0:03:32.397 *********** 2025-06-02 20:06:51.441549 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-06-02 20:06:51.441563 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-06-02 20:06:51.441574 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-06-02 20:06:51.441585 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:06:51.441595 | orchestrator | 2025-06-02 20:06:51.441605 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] ********* 2025-06-02 20:06:51.441616 | orchestrator | Monday 02 June 2025 19:59:22 +0000 (0:00:00.823) 0:03:33.220 *********** 2025-06-02 20:06:51.441623 | orchestrator | ok: [testbed-node-0] 2025-06-02 20:06:51.441629 | orchestrator | ok: [testbed-node-1] 2025-06-02 20:06:51.441635 | orchestrator | ok: [testbed-node-2] 2025-06-02 20:06:51.441641 | orchestrator | 2025-06-02 20:06:51.441647 | orchestrator | PLAY [Apply role ceph-mon] ***************************************************** 2025-06-02 20:06:51.441653 | orchestrator | 2025-06-02 20:06:51.441659 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2025-06-02 20:06:51.441665 | orchestrator | Monday 02 June 2025 19:59:23 +0000 (0:00:00.841) 0:03:34.061 *********** 2025-06-02 20:06:51.441672 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 20:06:51.441678 | orchestrator | 2025-06-02 20:06:51.441684 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2025-06-02 
20:06:51.441690 | orchestrator | Monday 02 June 2025 19:59:24 +0000 (0:00:00.535) 0:03:34.597 *********** 2025-06-02 20:06:51.441696 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 20:06:51.441702 | orchestrator | 2025-06-02 20:06:51.441708 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2025-06-02 20:06:51.441714 | orchestrator | Monday 02 June 2025 19:59:25 +0000 (0:00:00.790) 0:03:35.387 *********** 2025-06-02 20:06:51.441725 | orchestrator | ok: [testbed-node-0] 2025-06-02 20:06:51.441733 | orchestrator | ok: [testbed-node-2] 2025-06-02 20:06:51.441743 | orchestrator | ok: [testbed-node-1] 2025-06-02 20:06:51.441752 | orchestrator | 2025-06-02 20:06:51.441762 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2025-06-02 20:06:51.441771 | orchestrator | Monday 02 June 2025 19:59:25 +0000 (0:00:00.745) 0:03:36.132 *********** 2025-06-02 20:06:51.441781 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:06:51.441791 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:06:51.441801 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:06:51.441811 | orchestrator | 2025-06-02 20:06:51.441821 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2025-06-02 20:06:51.441832 | orchestrator | Monday 02 June 2025 19:59:26 +0000 (0:00:00.309) 0:03:36.442 *********** 2025-06-02 20:06:51.441842 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:06:51.441852 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:06:51.441863 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:06:51.441873 | orchestrator | 2025-06-02 20:06:51.441901 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2025-06-02 20:06:51.441907 | orchestrator | Monday 02 June 2025 19:59:26 
+0000 (0:00:00.406) 0:03:36.849 *********** 2025-06-02 20:06:51.441914 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:06:51.441920 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:06:51.441926 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:06:51.441932 | orchestrator | 2025-06-02 20:06:51.441938 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2025-06-02 20:06:51.441944 | orchestrator | Monday 02 June 2025 19:59:27 +0000 (0:00:00.653) 0:03:37.503 *********** 2025-06-02 20:06:51.441951 | orchestrator | ok: [testbed-node-0] 2025-06-02 20:06:51.441957 | orchestrator | ok: [testbed-node-1] 2025-06-02 20:06:51.441963 | orchestrator | ok: [testbed-node-2] 2025-06-02 20:06:51.441969 | orchestrator | 2025-06-02 20:06:51.441975 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2025-06-02 20:06:51.441982 | orchestrator | Monday 02 June 2025 19:59:27 +0000 (0:00:00.749) 0:03:38.252 *********** 2025-06-02 20:06:51.441988 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:06:51.441994 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:06:51.442000 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:06:51.442006 | orchestrator | 2025-06-02 20:06:51.442012 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2025-06-02 20:06:51.442050 | orchestrator | Monday 02 June 2025 19:59:28 +0000 (0:00:00.329) 0:03:38.582 *********** 2025-06-02 20:06:51.442056 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:06:51.442063 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:06:51.442069 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:06:51.442075 | orchestrator | 2025-06-02 20:06:51.442082 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2025-06-02 20:06:51.442116 | orchestrator | Monday 02 June 2025 19:59:28 +0000 (0:00:00.314) 
0:03:38.896 *********** 2025-06-02 20:06:51.442124 | orchestrator | ok: [testbed-node-0] 2025-06-02 20:06:51.442130 | orchestrator | ok: [testbed-node-1] 2025-06-02 20:06:51.442136 | orchestrator | ok: [testbed-node-2] 2025-06-02 20:06:51.442142 | orchestrator | 2025-06-02 20:06:51.442148 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2025-06-02 20:06:51.442154 | orchestrator | Monday 02 June 2025 19:59:29 +0000 (0:00:01.037) 0:03:39.933 *********** 2025-06-02 20:06:51.442161 | orchestrator | ok: [testbed-node-0] 2025-06-02 20:06:51.442167 | orchestrator | ok: [testbed-node-1] 2025-06-02 20:06:51.442173 | orchestrator | ok: [testbed-node-2] 2025-06-02 20:06:51.442179 | orchestrator | 2025-06-02 20:06:51.442185 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-06-02 20:06:51.442191 | orchestrator | Monday 02 June 2025 19:59:30 +0000 (0:00:00.687) 0:03:40.621 *********** 2025-06-02 20:06:51.442197 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:06:51.442203 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:06:51.442220 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:06:51.442227 | orchestrator | 2025-06-02 20:06:51.442233 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2025-06-02 20:06:51.442239 | orchestrator | Monday 02 June 2025 19:59:30 +0000 (0:00:00.315) 0:03:40.937 *********** 2025-06-02 20:06:51.442245 | orchestrator | ok: [testbed-node-0] 2025-06-02 20:06:51.442251 | orchestrator | ok: [testbed-node-1] 2025-06-02 20:06:51.442257 | orchestrator | ok: [testbed-node-2] 2025-06-02 20:06:51.442263 | orchestrator | 2025-06-02 20:06:51.442269 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-06-02 20:06:51.442279 | orchestrator | Monday 02 June 2025 19:59:30 +0000 (0:00:00.305) 0:03:41.242 *********** 2025-06-02 20:06:51.442286 | 
orchestrator | skipping: [testbed-node-0] 2025-06-02 20:06:51.442292 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:06:51.442298 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:06:51.442304 | orchestrator | 2025-06-02 20:06:51.442310 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2025-06-02 20:06:51.442317 | orchestrator | Monday 02 June 2025 19:59:31 +0000 (0:00:00.573) 0:03:41.816 *********** 2025-06-02 20:06:51.442323 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:06:51.442329 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:06:51.442334 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:06:51.442341 | orchestrator | 2025-06-02 20:06:51.442347 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-06-02 20:06:51.442353 | orchestrator | Monday 02 June 2025 19:59:31 +0000 (0:00:00.298) 0:03:42.114 *********** 2025-06-02 20:06:51.442359 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:06:51.442365 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:06:51.442371 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:06:51.442377 | orchestrator | 2025-06-02 20:06:51.442383 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2025-06-02 20:06:51.442389 | orchestrator | Monday 02 June 2025 19:59:32 +0000 (0:00:00.323) 0:03:42.438 *********** 2025-06-02 20:06:51.442395 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:06:51.442401 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:06:51.442407 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:06:51.442413 | orchestrator | 2025-06-02 20:06:51.442420 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-06-02 20:06:51.442426 | orchestrator | Monday 02 June 2025 19:59:32 +0000 (0:00:00.303) 0:03:42.741 *********** 2025-06-02 20:06:51.442432 | 
orchestrator | skipping: [testbed-node-0] 2025-06-02 20:06:51.442438 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:06:51.442444 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:06:51.442450 | orchestrator | 2025-06-02 20:06:51.442456 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2025-06-02 20:06:51.442462 | orchestrator | Monday 02 June 2025 19:59:33 +0000 (0:00:00.597) 0:03:43.338 *********** 2025-06-02 20:06:51.442468 | orchestrator | ok: [testbed-node-0] 2025-06-02 20:06:51.442474 | orchestrator | ok: [testbed-node-1] 2025-06-02 20:06:51.442480 | orchestrator | ok: [testbed-node-2] 2025-06-02 20:06:51.442486 | orchestrator | 2025-06-02 20:06:51.442492 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2025-06-02 20:06:51.442499 | orchestrator | Monday 02 June 2025 19:59:33 +0000 (0:00:00.394) 0:03:43.733 *********** 2025-06-02 20:06:51.442505 | orchestrator | ok: [testbed-node-0] 2025-06-02 20:06:51.442511 | orchestrator | ok: [testbed-node-1] 2025-06-02 20:06:51.442517 | orchestrator | ok: [testbed-node-2] 2025-06-02 20:06:51.442523 | orchestrator | 2025-06-02 20:06:51.442529 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2025-06-02 20:06:51.442535 | orchestrator | Monday 02 June 2025 19:59:33 +0000 (0:00:00.406) 0:03:44.140 *********** 2025-06-02 20:06:51.442541 | orchestrator | ok: [testbed-node-0] 2025-06-02 20:06:51.442547 | orchestrator | ok: [testbed-node-1] 2025-06-02 20:06:51.442553 | orchestrator | ok: [testbed-node-2] 2025-06-02 20:06:51.442559 | orchestrator | 2025-06-02 20:06:51.442570 | orchestrator | TASK [ceph-mon : Set_fact container_exec_cmd] ********************************** 2025-06-02 20:06:51.442576 | orchestrator | Monday 02 June 2025 19:59:34 +0000 (0:00:01.046) 0:03:45.186 *********** 2025-06-02 20:06:51.442582 | orchestrator | ok: [testbed-node-0] 2025-06-02 
20:06:51.442588 | orchestrator | ok: [testbed-node-1]
2025-06-02 20:06:51.442594 | orchestrator | ok: [testbed-node-2]
2025-06-02 20:06:51.442600 | orchestrator |
2025-06-02 20:06:51.442606 | orchestrator | TASK [ceph-mon : Include deploy_monitors.yml] **********************************
2025-06-02 20:06:51.442616 | orchestrator | Monday 02 June 2025 19:59:35 +0000 (0:00:00.363) 0:03:45.549 ***********
2025-06-02 20:06:51.442627 | orchestrator | included: /ansible/roles/ceph-mon/tasks/deploy_monitors.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-02 20:06:51.442638 | orchestrator |
2025-06-02 20:06:51.442648 | orchestrator | TASK [ceph-mon : Check if monitor initial keyring already exists] **************
2025-06-02 20:06:51.442658 | orchestrator | Monday 02 June 2025 19:59:35 +0000 (0:00:00.578) 0:03:46.128 ***********
2025-06-02 20:06:51.442668 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:06:51.442677 | orchestrator |
2025-06-02 20:06:51.442689 | orchestrator | TASK [ceph-mon : Generate monitor initial keyring] *****************************
2025-06-02 20:06:51.442699 | orchestrator | Monday 02 June 2025 19:59:35 +0000 (0:00:00.145) 0:03:46.273 ***********
2025-06-02 20:06:51.442711 | orchestrator | changed: [testbed-node-0 -> localhost]
2025-06-02 20:06:51.442718 | orchestrator |
2025-06-02 20:06:51.442752 | orchestrator | TASK [ceph-mon : Set_fact _initial_mon_key_success] ****************************
2025-06-02 20:06:51.442761 | orchestrator | Monday 02 June 2025 19:59:37 +0000 (0:00:01.596) 0:03:47.870 ***********
2025-06-02 20:06:51.442768 | orchestrator | ok: [testbed-node-0]
2025-06-02 20:06:51.442775 | orchestrator | ok: [testbed-node-1]
2025-06-02 20:06:51.442782 | orchestrator | ok: [testbed-node-2]
2025-06-02 20:06:51.442789 | orchestrator |
2025-06-02 20:06:51.442796 | orchestrator | TASK [ceph-mon : Get initial keyring when it already exists] *******************
2025-06-02 20:06:51.442803 | orchestrator | Monday 02 June 2025 19:59:37 +0000 (0:00:00.356) 0:03:48.227 ***********
2025-06-02 20:06:51.442810 | orchestrator | ok: [testbed-node-0]
2025-06-02 20:06:51.442817 | orchestrator | ok: [testbed-node-1]
2025-06-02 20:06:51.442824 | orchestrator | ok: [testbed-node-2]
2025-06-02 20:06:51.442831 | orchestrator |
2025-06-02 20:06:51.442838 | orchestrator | TASK [ceph-mon : Create monitor initial keyring] *******************************
2025-06-02 20:06:51.442845 | orchestrator | Monday 02 June 2025 19:59:38 +0000 (0:00:00.443) 0:03:48.670 ***********
2025-06-02 20:06:51.442852 | orchestrator | changed: [testbed-node-1]
2025-06-02 20:06:51.442859 | orchestrator | changed: [testbed-node-0]
2025-06-02 20:06:51.442867 | orchestrator | changed: [testbed-node-2]
2025-06-02 20:06:51.442874 | orchestrator |
2025-06-02 20:06:51.442897 | orchestrator | TASK [ceph-mon : Copy the initial key in /etc/ceph (for containers)] ***********
2025-06-02 20:06:51.442905 | orchestrator | Monday 02 June 2025 19:59:39 +0000 (0:00:01.295) 0:03:49.966 ***********
2025-06-02 20:06:51.442912 | orchestrator | changed: [testbed-node-0]
2025-06-02 20:06:51.442919 | orchestrator | changed: [testbed-node-1]
2025-06-02 20:06:51.442926 | orchestrator | changed: [testbed-node-2]
2025-06-02 20:06:51.442933 | orchestrator |
2025-06-02 20:06:51.442945 | orchestrator | TASK [ceph-mon : Create monitor directory] *************************************
2025-06-02 20:06:51.442952 | orchestrator | Monday 02 June 2025 19:59:40 +0000 (0:00:01.150) 0:03:51.116 ***********
2025-06-02 20:06:51.442959 | orchestrator | changed: [testbed-node-0]
2025-06-02 20:06:51.442966 | orchestrator | changed: [testbed-node-1]
2025-06-02 20:06:51.442973 | orchestrator | changed: [testbed-node-2]
2025-06-02 20:06:51.442981 | orchestrator |
2025-06-02 20:06:51.442988 | orchestrator | TASK [ceph-mon : Recursively fix ownership of monitor directory] ***************
2025-06-02 20:06:51.442995 | orchestrator | Monday 02 June 2025 19:59:41 +0000 (0:00:00.706) 0:03:51.822 ***********
2025-06-02 20:06:51.443002 | orchestrator | ok: [testbed-node-0]
2025-06-02 20:06:51.443009 | orchestrator | ok: [testbed-node-1]
2025-06-02 20:06:51.443016 | orchestrator | ok: [testbed-node-2]
2025-06-02 20:06:51.443029 | orchestrator |
2025-06-02 20:06:51.443036 | orchestrator | TASK [ceph-mon : Create admin keyring] *****************************************
2025-06-02 20:06:51.443043 | orchestrator | Monday 02 June 2025 19:59:42 +0000 (0:00:00.690) 0:03:52.513 ***********
2025-06-02 20:06:51.443050 | orchestrator | changed: [testbed-node-0]
2025-06-02 20:06:51.443057 | orchestrator |
2025-06-02 20:06:51.443064 | orchestrator | TASK [ceph-mon : Slurp admin keyring] ******************************************
2025-06-02 20:06:51.443072 | orchestrator | Monday 02 June 2025 19:59:43 +0000 (0:00:01.378) 0:03:53.892 ***********
2025-06-02 20:06:51.443079 | orchestrator | ok: [testbed-node-0]
2025-06-02 20:06:51.443086 | orchestrator |
2025-06-02 20:06:51.443093 | orchestrator | TASK [ceph-mon : Copy admin keyring over to mons] ******************************
2025-06-02 20:06:51.443100 | orchestrator | Monday 02 June 2025 19:59:44 +0000 (0:00:00.742) 0:03:54.634 ***********
2025-06-02 20:06:51.443107 | orchestrator | changed: [testbed-node-0] => (item=None)
2025-06-02 20:06:51.443114 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-06-02 20:06:51.443121 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-06-02 20:06:51.443128 | orchestrator | changed: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=None)
2025-06-02 20:06:51.443135 | orchestrator | ok: [testbed-node-1] => (item=None)
2025-06-02 20:06:51.443143 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=None)
2025-06-02 20:06:51.443150 | orchestrator | changed: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=None)
2025-06-02 20:06:51.443157 | orchestrator | changed: [testbed-node-0 -> {{ item }}]
2025-06-02 20:06:51.443164 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=None)
2025-06-02 20:06:51.443171 | orchestrator | ok: [testbed-node-1 -> {{ item }}]
2025-06-02 20:06:51.443178 | orchestrator | ok: [testbed-node-2] => (item=None)
2025-06-02 20:06:51.443185 | orchestrator | ok: [testbed-node-2 -> {{ item }}]
2025-06-02 20:06:51.443192 | orchestrator |
2025-06-02 20:06:51.443199 | orchestrator | TASK [ceph-mon : Import admin keyring into mon keyring] ************************
2025-06-02 20:06:51.443206 | orchestrator | Monday 02 June 2025 19:59:48 +0000 (0:00:03.889) 0:03:58.523 ***********
2025-06-02 20:06:51.443213 | orchestrator | changed: [testbed-node-0]
2025-06-02 20:06:51.443220 | orchestrator | changed: [testbed-node-1]
2025-06-02 20:06:51.443227 | orchestrator | changed: [testbed-node-2]
2025-06-02 20:06:51.443234 | orchestrator |
2025-06-02 20:06:51.443241 | orchestrator | TASK [ceph-mon : Set_fact ceph-mon container command] **************************
2025-06-02 20:06:51.443248 | orchestrator | Monday 02 June 2025 19:59:49 +0000 (0:00:01.546) 0:04:00.070 ***********
2025-06-02 20:06:51.443255 | orchestrator | ok: [testbed-node-0]
2025-06-02 20:06:51.443262 | orchestrator | ok: [testbed-node-1]
2025-06-02 20:06:51.443270 | orchestrator | ok: [testbed-node-2]
2025-06-02 20:06:51.443277 | orchestrator |
2025-06-02 20:06:51.443284 | orchestrator | TASK [ceph-mon : Set_fact monmaptool container command] ************************
2025-06-02 20:06:51.443291 | orchestrator | Monday 02 June 2025 19:59:50 +0000 (0:00:00.325) 0:04:00.396 ***********
2025-06-02 20:06:51.443298 | orchestrator | ok: [testbed-node-0]
2025-06-02 20:06:51.443305 | orchestrator | ok: [testbed-node-1]
2025-06-02 20:06:51.443312 | orchestrator | ok: [testbed-node-2]
2025-06-02 20:06:51.443319 | orchestrator |
2025-06-02 20:06:51.443326 | orchestrator | TASK [ceph-mon : Generate initial monmap] **************************************
2025-06-02 20:06:51.443333 | orchestrator | Monday 02 June 2025 19:59:50 +0000 (0:00:00.377) 0:04:00.774 ***********
2025-06-02 20:06:51.443340 | orchestrator | changed: [testbed-node-0]
2025-06-02 20:06:51.443348 | orchestrator | changed: [testbed-node-1]
2025-06-02 20:06:51.443355 | orchestrator | changed: [testbed-node-2]
2025-06-02 20:06:51.443362 | orchestrator |
2025-06-02 20:06:51.443369 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs with keyring] *******************************
2025-06-02 20:06:51.443397 | orchestrator | Monday 02 June 2025 19:59:52 +0000 (0:00:01.836) 0:04:02.610 ***********
2025-06-02 20:06:51.443405 | orchestrator | changed: [testbed-node-0]
2025-06-02 20:06:51.443417 | orchestrator | changed: [testbed-node-1]
2025-06-02 20:06:51.443424 | orchestrator | changed: [testbed-node-2]
2025-06-02 20:06:51.443432 | orchestrator |
2025-06-02 20:06:51.443439 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs without keyring] ****************************
2025-06-02 20:06:51.443446 | orchestrator | Monday 02 June 2025 19:59:53 +0000 (0:00:01.670) 0:04:04.281 ***********
2025-06-02 20:06:51.443453 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:06:51.443460 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:06:51.443467 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:06:51.443474 | orchestrator |
2025-06-02 20:06:51.443481 | orchestrator | TASK [ceph-mon : Include start_monitor.yml] ************************************
2025-06-02 20:06:51.443488 | orchestrator | Monday 02 June 2025 19:59:54 +0000 (0:00:00.311) 0:04:04.593 ***********
2025-06-02 20:06:51.443496 | orchestrator | included: /ansible/roles/ceph-mon/tasks/start_monitor.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-02 20:06:51.443503 | orchestrator |
2025-06-02 20:06:51.443510 | orchestrator | TASK [ceph-mon : Ensure systemd service override directory exists] *************
2025-06-02 20:06:51.443517 | orchestrator | Monday 02 June 2025 19:59:54 +0000 (0:00:00.520) 0:04:05.113 ***********
2025-06-02 20:06:51.443524 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:06:51.443531 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:06:51.443538 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:06:51.443545 | orchestrator |
2025-06-02 20:06:51.443556 | orchestrator | TASK [ceph-mon : Add ceph-mon systemd service overrides] ***********************
2025-06-02 20:06:51.443563 | orchestrator | Monday 02 June 2025 19:59:55 +0000 (0:00:00.569) 0:04:05.683 ***********
2025-06-02 20:06:51.443571 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:06:51.443578 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:06:51.443585 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:06:51.443592 | orchestrator |
2025-06-02 20:06:51.443599 | orchestrator | TASK [ceph-mon : Include_tasks systemd.yml] ************************************
2025-06-02 20:06:51.443606 | orchestrator | Monday 02 June 2025 19:59:55 +0000 (0:00:00.344) 0:04:06.027 ***********
2025-06-02 20:06:51.443614 | orchestrator | included: /ansible/roles/ceph-mon/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-02 20:06:51.443621 | orchestrator |
2025-06-02 20:06:51.443628 | orchestrator | TASK [ceph-mon : Generate systemd unit file for mon container] *****************
2025-06-02 20:06:51.443635 | orchestrator | Monday 02 June 2025 19:59:56 +0000 (0:00:00.562) 0:04:06.590 ***********
2025-06-02 20:06:51.443642 | orchestrator | changed: [testbed-node-0]
2025-06-02 20:06:51.443649 | orchestrator | changed: [testbed-node-1]
2025-06-02 20:06:51.443656 | orchestrator | changed: [testbed-node-2]
2025-06-02 20:06:51.443663 | orchestrator |
2025-06-02 20:06:51.443670 | orchestrator | TASK [ceph-mon : Generate systemd ceph-mon target file] ************************
2025-06-02 20:06:51.443678 | orchestrator | Monday 02 June 2025 19:59:58 +0000 (0:00:01.787) 0:04:08.378 ***********
2025-06-02 20:06:51.443685 | orchestrator | changed: [testbed-node-0]
2025-06-02 20:06:51.443692 | orchestrator | changed: [testbed-node-1]
2025-06-02 20:06:51.443699 | orchestrator | changed: [testbed-node-2]
2025-06-02 20:06:51.443706 | orchestrator |
2025-06-02 20:06:51.443713 | orchestrator | TASK [ceph-mon : Enable ceph-mon.target] ***************************************
2025-06-02 20:06:51.443720 | orchestrator | Monday 02 June 2025 19:59:59 +0000 (0:00:01.144) 0:04:09.523 ***********
2025-06-02 20:06:51.443728 | orchestrator | changed: [testbed-node-1]
2025-06-02 20:06:51.443735 | orchestrator | changed: [testbed-node-0]
2025-06-02 20:06:51.443742 | orchestrator | changed: [testbed-node-2]
2025-06-02 20:06:51.443749 | orchestrator |
2025-06-02 20:06:51.443756 | orchestrator | TASK [ceph-mon : Start the monitor service] ************************************
2025-06-02 20:06:51.443763 | orchestrator | Monday 02 June 2025 20:00:01 +0000 (0:00:01.898) 0:04:11.421 ***********
2025-06-02 20:06:51.443770 | orchestrator | changed: [testbed-node-0]
2025-06-02 20:06:51.443777 | orchestrator | changed: [testbed-node-2]
2025-06-02 20:06:51.443784 | orchestrator | changed: [testbed-node-1]
2025-06-02 20:06:51.443792 | orchestrator |
2025-06-02 20:06:51.443803 | orchestrator | TASK [ceph-mon : Include_tasks ceph_keys.yml] **********************************
2025-06-02 20:06:51.443810 | orchestrator | Monday 02 June 2025 20:00:03 +0000 (0:00:02.097) 0:04:13.519 ***********
2025-06-02 20:06:51.443818 | orchestrator | included: /ansible/roles/ceph-mon/tasks/ceph_keys.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-02 20:06:51.443825 | orchestrator |
2025-06-02 20:06:51.443832 | orchestrator | TASK [ceph-mon : Waiting for the monitor(s) to form the quorum...] *************
2025-06-02 20:06:51.443839 | orchestrator | Monday 02 June 2025 20:00:03 +0000 (0:00:00.757) 0:04:14.276 ***********
2025-06-02 20:06:51.443846 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for the monitor(s) to form the quorum... (10 retries left).
2025-06-02 20:06:51.443853 | orchestrator | ok: [testbed-node-0]
2025-06-02 20:06:51.443860 | orchestrator |
2025-06-02 20:06:51.443867 | orchestrator | TASK [ceph-mon : Fetch ceph initial keys] **************************************
2025-06-02 20:06:51.443875 | orchestrator | Monday 02 June 2025 20:00:26 +0000 (0:00:22.068) 0:04:36.345 ***********
2025-06-02 20:06:51.443925 | orchestrator | ok: [testbed-node-0]
2025-06-02 20:06:51.443933 | orchestrator | ok: [testbed-node-1]
2025-06-02 20:06:51.443940 | orchestrator | ok: [testbed-node-2]
2025-06-02 20:06:51.443947 | orchestrator |
2025-06-02 20:06:51.443954 | orchestrator | TASK [ceph-mon : Include secure_cluster.yml] ***********************************
2025-06-02 20:06:51.443961 | orchestrator | Monday 02 June 2025 20:00:37 +0000 (0:00:11.141) 0:04:47.487 ***********
2025-06-02 20:06:51.443969 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:06:51.443976 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:06:51.443983 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:06:51.443990 | orchestrator |
2025-06-02 20:06:51.443998 | orchestrator | TASK [ceph-mon : Set cluster configs] ******************************************
2025-06-02 20:06:51.444005 | orchestrator | Monday 02 June 2025 20:00:37 +0000 (0:00:00.501) 0:04:47.988 ***********
2025-06-02 20:06:51.444041 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__5c9de668255b3f8063bf6096641db34b7f780456'}}, {'key': 'public_network', 'value': '192.168.16.0/20'}])
2025-06-02 20:06:51.444051 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__5c9de668255b3f8063bf6096641db34b7f780456'}}, {'key': 'cluster_network', 'value': '192.168.16.0/20'}])
2025-06-02 20:06:51.444060 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__5c9de668255b3f8063bf6096641db34b7f780456'}}, {'key': 'osd_pool_default_crush_rule', 'value': -1}])
2025-06-02 20:06:51.444073 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__5c9de668255b3f8063bf6096641db34b7f780456'}}, {'key': 'ms_bind_ipv6', 'value': 'False'}])
2025-06-02 20:06:51.444081 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__5c9de668255b3f8063bf6096641db34b7f780456'}}, {'key': 'ms_bind_ipv4', 'value': 'True'}])
2025-06-02 20:06:51.444090 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__5c9de668255b3f8063bf6096641db34b7f780456'}}, {'key': 'osd_crush_chooseleaf_type', 'value': '__omit_place_holder__5c9de668255b3f8063bf6096641db34b7f780456'}])
2025-06-02 20:06:51.444105 | orchestrator |
2025-06-02 20:06:51.444113 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2025-06-02 20:06:51.444120 | orchestrator | Monday 02 June 2025 20:00:52 +0000 (0:00:14.738) 0:05:02.727 ***********
2025-06-02 20:06:51.444127 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:06:51.444134 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:06:51.444141 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:06:51.444148 | orchestrator |
2025-06-02 20:06:51.444155 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] **********************************
2025-06-02 20:06:51.444162 | orchestrator | Monday 02 June 2025 20:00:52 +0000 (0:00:00.471) 0:05:03.199 ***********
2025-06-02 20:06:51.444169 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-02 20:06:51.444177 | orchestrator |
2025-06-02 20:06:51.444184 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ********
2025-06-02 20:06:51.444191 | orchestrator | Monday 02 June 2025 20:00:53 +0000 (0:00:00.706) 0:05:03.906 ***********
2025-06-02 20:06:51.444198 | orchestrator | ok: [testbed-node-0]
2025-06-02 20:06:51.444205 | orchestrator | ok: [testbed-node-1]
2025-06-02 20:06:51.444212 | orchestrator | ok: [testbed-node-2]
2025-06-02 20:06:51.444219 | orchestrator |
2025-06-02 20:06:51.444226 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] ***********************
2025-06-02 20:06:51.444234 | orchestrator | Monday 02 June 2025 20:00:53 +0000 (0:00:00.323) 0:05:04.229 ***********
2025-06-02 20:06:51.444241 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:06:51.444248 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:06:51.444255 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:06:51.444262 | orchestrator |
2025-06-02 20:06:51.444269 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ********************
2025-06-02 20:06:51.444276 | orchestrator | Monday 02 June 2025 20:00:54 +0000 (0:00:00.345) 0:05:04.575 ***********
2025-06-02 20:06:51.444284 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2025-06-02 20:06:51.444291 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2025-06-02 20:06:51.444298 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2025-06-02 20:06:51.444305 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:06:51.444312 | orchestrator |
2025-06-02 20:06:51.444319 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] *********
2025-06-02 20:06:51.444326 | orchestrator | Monday 02 June 2025 20:00:55 +0000 (0:00:00.724) 0:05:05.299 ***********
2025-06-02 20:06:51.444333 | orchestrator | ok: [testbed-node-0]
2025-06-02 20:06:51.444340 | orchestrator | ok: [testbed-node-1]
2025-06-02 20:06:51.444348 | orchestrator | ok: [testbed-node-2]
2025-06-02 20:06:51.444355 | orchestrator |
2025-06-02 20:06:51.444362 | orchestrator | PLAY [Apply role ceph-mgr] *****************************************************
2025-06-02 20:06:51.444369 | orchestrator |
2025-06-02 20:06:51.444376 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2025-06-02 20:06:51.444406 | orchestrator | Monday 02 June 2025 20:00:55 +0000 (0:00:00.780) 0:05:06.080 ***********
2025-06-02 20:06:51.444414 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-02 20:06:51.444422 | orchestrator |
2025-06-02 20:06:51.444429 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2025-06-02 20:06:51.444436 | orchestrator | Monday 02 June 2025 20:00:56 +0000 (0:00:00.452) 0:05:06.532 ***********
2025-06-02 20:06:51.444443 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-02 20:06:51.444456 | orchestrator |
2025-06-02 20:06:51.444463 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2025-06-02 20:06:51.444469 | orchestrator | Monday 02 June 2025 20:00:56 +0000 (0:00:00.693) 0:05:07.225 ***********
2025-06-02 20:06:51.444476 | orchestrator | ok: [testbed-node-0]
2025-06-02 20:06:51.444482 | orchestrator | ok: [testbed-node-1]
2025-06-02 20:06:51.444489 | orchestrator | ok: [testbed-node-2]
2025-06-02 20:06:51.444495 | orchestrator |
2025-06-02 20:06:51.444502 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2025-06-02 20:06:51.444508 | orchestrator | Monday 02 June 2025 20:00:57 +0000 (0:00:00.743) 0:05:07.969 ***********
2025-06-02 20:06:51.444515 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:06:51.444521 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:06:51.444532 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:06:51.444538 | orchestrator |
2025-06-02 20:06:51.444545 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2025-06-02 20:06:51.444552 | orchestrator | Monday 02 June 2025 20:00:57 +0000 (0:00:00.417) 0:05:08.232 ***********
2025-06-02 20:06:51.444558 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:06:51.444565 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:06:51.444571 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:06:51.444578 | orchestrator |
2025-06-02 20:06:51.444584 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2025-06-02 20:06:51.444591 | orchestrator | Monday 02 June 2025 20:00:58 +0000 (0:00:00.417) 0:05:08.649 ***********
2025-06-02 20:06:51.444597 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:06:51.444603 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:06:51.444610 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:06:51.444616 | orchestrator |
2025-06-02 20:06:51.444623 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2025-06-02 20:06:51.444630 | orchestrator | Monday 02 June 2025 20:00:58 +0000 (0:00:00.267) 0:05:08.916 ***********
2025-06-02 20:06:51.444636 | orchestrator | ok: [testbed-node-0]
2025-06-02 20:06:51.444643 | orchestrator | ok: [testbed-node-1]
2025-06-02 20:06:51.444649 | orchestrator | ok: [testbed-node-2]
2025-06-02 20:06:51.444656 | orchestrator |
2025-06-02 20:06:51.444662 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2025-06-02 20:06:51.444669 | orchestrator | Monday 02 June 2025 20:00:59 +0000 (0:00:00.673) 0:05:09.590 ***********
2025-06-02 20:06:51.444675 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:06:51.444682 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:06:51.444688 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:06:51.444695 | orchestrator |
2025-06-02 20:06:51.444701 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2025-06-02 20:06:51.444708 | orchestrator | Monday 02 June 2025 20:00:59 +0000 (0:00:00.261) 0:05:09.852 ***********
2025-06-02 20:06:51.444714 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:06:51.444721 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:06:51.444727 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:06:51.444734 | orchestrator |
2025-06-02 20:06:51.444740 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2025-06-02 20:06:51.444747 | orchestrator | Monday 02 June 2025 20:01:00 +0000 (0:00:00.434) 0:05:10.286 ***********
2025-06-02 20:06:51.444753 | orchestrator | ok: [testbed-node-0]
2025-06-02 20:06:51.444760 | orchestrator | ok: [testbed-node-1]
2025-06-02 20:06:51.444766 | orchestrator | ok: [testbed-node-2]
2025-06-02 20:06:51.444773 | orchestrator |
2025-06-02 20:06:51.444779 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2025-06-02 20:06:51.444786 | orchestrator | Monday 02 June 2025 20:01:00 +0000 (0:00:00.702) 0:05:10.989 ***********
2025-06-02 20:06:51.444792 | orchestrator | ok: [testbed-node-0]
2025-06-02 20:06:51.444799 | orchestrator | ok: [testbed-node-1]
2025-06-02 20:06:51.444805 | orchestrator | ok: [testbed-node-2]
2025-06-02 20:06:51.444811 | orchestrator |
2025-06-02 20:06:51.444818 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2025-06-02 20:06:51.444829 | orchestrator | Monday 02 June 2025 20:01:01 +0000 (0:00:00.671) 0:05:11.660 ***********
2025-06-02 20:06:51.444836 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:06:51.444842 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:06:51.444849 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:06:51.444855 | orchestrator |
2025-06-02 20:06:51.444862 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2025-06-02 20:06:51.444869 | orchestrator | Monday 02 June 2025 20:01:01 +0000 (0:00:00.246) 0:05:11.907 ***********
2025-06-02 20:06:51.444875 | orchestrator | ok: [testbed-node-0]
2025-06-02 20:06:51.444893 | orchestrator | ok: [testbed-node-1]
2025-06-02 20:06:51.444900 | orchestrator | ok: [testbed-node-2]
2025-06-02 20:06:51.444907 | orchestrator |
2025-06-02 20:06:51.444913 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2025-06-02 20:06:51.444920 | orchestrator | Monday 02 June 2025 20:01:02 +0000 (0:00:00.443) 0:05:12.350 ***********
2025-06-02 20:06:51.444926 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:06:51.444933 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:06:51.444939 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:06:51.444946 | orchestrator |
2025-06-02 20:06:51.444952 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2025-06-02 20:06:51.444959 | orchestrator | Monday 02 June 2025 20:01:02 +0000 (0:00:00.263) 0:05:12.614 ***********
2025-06-02 20:06:51.444965 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:06:51.444972 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:06:51.444978 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:06:51.444985 | orchestrator |
2025-06-02 20:06:51.444992 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2025-06-02 20:06:51.445018 | orchestrator | Monday 02 June 2025 20:01:02 +0000 (0:00:00.266) 0:05:12.881 ***********
2025-06-02 20:06:51.445026 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:06:51.445032 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:06:51.445039 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:06:51.445046 | orchestrator |
2025-06-02 20:06:51.445052 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2025-06-02 20:06:51.445059 | orchestrator | Monday 02 June 2025 20:01:02 +0000 (0:00:00.276) 0:05:13.157 ***********
2025-06-02 20:06:51.445065 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:06:51.445072 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:06:51.445079 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:06:51.445085 | orchestrator |
2025-06-02 20:06:51.445092 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2025-06-02 20:06:51.445098 | orchestrator | Monday 02 June 2025 20:01:03 +0000 (0:00:00.434) 0:05:13.592 ***********
2025-06-02 20:06:51.445105 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:06:51.445111 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:06:51.445118 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:06:51.445124 | orchestrator |
2025-06-02 20:06:51.445131 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2025-06-02 20:06:51.445137 | orchestrator | Monday 02 June 2025 20:01:03 +0000 (0:00:00.274) 0:05:13.866 ***********
2025-06-02 20:06:51.445144 | orchestrator | ok: [testbed-node-0]
2025-06-02 20:06:51.445151 | orchestrator | ok: [testbed-node-1]
2025-06-02 20:06:51.445157 | orchestrator | ok: [testbed-node-2]
2025-06-02 20:06:51.445164 | orchestrator |
2025-06-02 20:06:51.445174 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2025-06-02 20:06:51.445181 | orchestrator | Monday 02 June 2025 20:01:03 +0000 (0:00:00.284) 0:05:14.150 ***********
2025-06-02 20:06:51.445188 | orchestrator | ok: [testbed-node-0]
2025-06-02 20:06:51.445194 | orchestrator | ok: [testbed-node-1]
2025-06-02 20:06:51.445201 | orchestrator | ok: [testbed-node-2]
2025-06-02 20:06:51.445207 | orchestrator |
2025-06-02 20:06:51.445214 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2025-06-02 20:06:51.445220 | orchestrator | Monday 02 June 2025 20:01:04 +0000 (0:00:00.287) 0:05:14.437 ***********
2025-06-02 20:06:51.445232 | orchestrator | ok: [testbed-node-0]
2025-06-02 20:06:51.445239 | orchestrator | ok: [testbed-node-1]
2025-06-02 20:06:51.445245 | orchestrator | ok: [testbed-node-2]
2025-06-02 20:06:51.445252 | orchestrator |
2025-06-02 20:06:51.445258 | orchestrator | TASK [ceph-mgr : Set_fact container_exec_cmd] **********************************
2025-06-02 20:06:51.445265 | orchestrator | Monday 02 June 2025 20:01:04 +0000 (0:00:00.642) 0:05:15.080 ***********
2025-06-02 20:06:51.445272 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2025-06-02 20:06:51.445279 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-06-02 20:06:51.445286 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-06-02 20:06:51.445292 | orchestrator |
2025-06-02 20:06:51.445299 | orchestrator | TASK [ceph-mgr : Include common.yml] *******************************************
2025-06-02 20:06:51.445305 | orchestrator | Monday 02 June 2025 20:01:05 +0000 (0:00:00.490) 0:05:15.570 ***********
2025-06-02 20:06:51.445312 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/common.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-02 20:06:51.445318 | orchestrator |
2025-06-02 20:06:51.445325 | orchestrator | TASK [ceph-mgr : Create mgr directory] *****************************************
2025-06-02 20:06:51.445331 | orchestrator | Monday 02 June 2025 20:01:05 +0000 (0:00:00.474) 0:05:16.044 ***********
2025-06-02 20:06:51.445338 | orchestrator | changed: [testbed-node-0]
2025-06-02 20:06:51.445344 | orchestrator | changed: [testbed-node-1]
2025-06-02 20:06:51.445351 | orchestrator | changed: [testbed-node-2]
2025-06-02 20:06:51.445358 | orchestrator |
2025-06-02 20:06:51.445364 | orchestrator | TASK [ceph-mgr : Fetch ceph mgr keyring] ***************************************
2025-06-02 20:06:51.445371 | orchestrator | Monday 02 June 2025 20:01:06 +0000 (0:00:00.839) 0:05:16.884 ***********
2025-06-02 20:06:51.445377 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:06:51.445384 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:06:51.445390 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:06:51.445397 | orchestrator |
2025-06-02 20:06:51.445403 | orchestrator | TASK [ceph-mgr : Create ceph mgr keyring(s) on a mon node] *********************
2025-06-02 20:06:51.445410 | orchestrator | Monday 02 June 2025 20:01:06 +0000 (0:00:00.314) 0:05:17.198 ***********
2025-06-02 20:06:51.445416 | orchestrator | changed: [testbed-node-0] => (item=None)
2025-06-02 20:06:51.445424 | orchestrator | changed: [testbed-node-0] => (item=None)
2025-06-02 20:06:51.445430 | orchestrator | changed: [testbed-node-0] => (item=None)
2025-06-02 20:06:51.445437 | orchestrator | changed: [testbed-node-0 -> {{ groups[mon_group_name][0] }}]
2025-06-02 20:06:51.445443 | orchestrator |
2025-06-02 20:06:51.445450 | orchestrator | TASK [ceph-mgr : Set_fact _mgr_keys] *******************************************
2025-06-02 20:06:51.445457 | orchestrator | Monday 02 June 2025 20:01:18 +0000 (0:00:11.159) 0:05:28.358 ***********
2025-06-02 20:06:51.445463 | orchestrator | ok: [testbed-node-0]
2025-06-02 20:06:51.445470 | orchestrator | ok: [testbed-node-1]
2025-06-02 20:06:51.445476 | orchestrator | ok: [testbed-node-2]
2025-06-02 20:06:51.445483 | orchestrator |
2025-06-02 20:06:51.445490 | orchestrator | TASK [ceph-mgr : Get keys from monitors] ***************************************
2025-06-02 20:06:51.445496 | orchestrator | Monday 02 June 2025 20:01:18 +0000 (0:00:00.303) 0:05:28.661 ***********
2025-06-02 20:06:51.445503 | orchestrator | skipping: [testbed-node-0] => (item=None)
2025-06-02 20:06:51.445510 | orchestrator | skipping: [testbed-node-1] => (item=None)
2025-06-02 20:06:51.445516 | orchestrator | skipping: [testbed-node-2] => (item=None)
2025-06-02 20:06:51.445523 | orchestrator | ok: [testbed-node-0] => (item=None)
2025-06-02 20:06:51.445529 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-06-02 20:06:51.445536 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-06-02 20:06:51.445543 | orchestrator |
2025-06-02 20:06:51.445549 | orchestrator | TASK [ceph-mgr : Copy ceph key(s) if needed] ***********************************
2025-06-02 20:06:51.445558 | orchestrator | Monday 02 June 2025 20:01:20 +0000 (0:00:02.512) 0:05:31.173 ***********
2025-06-02 20:06:51.445603 | orchestrator | skipping: [testbed-node-0] => (item=None)
2025-06-02 20:06:51.445611 | orchestrator | skipping: [testbed-node-1] => (item=None)
2025-06-02 20:06:51.445618 | orchestrator | skipping: [testbed-node-2] => (item=None)
2025-06-02 20:06:51.445625 | orchestrator | changed: [testbed-node-0] => (item=None)
2025-06-02 20:06:51.445631 | orchestrator | changed: [testbed-node-1] => (item=None)
2025-06-02 20:06:51.445638 | orchestrator | changed: [testbed-node-2] => (item=None)
2025-06-02 20:06:51.445645 | orchestrator |
2025-06-02 20:06:51.445651 | orchestrator | TASK [ceph-mgr : Set mgr key permissions] **************************************
2025-06-02 20:06:51.445658 | orchestrator | Monday 02 June 2025 20:01:22 +0000 (0:00:01.310) 0:05:32.484 ***********
2025-06-02 20:06:51.445665 | orchestrator | ok: [testbed-node-0]
2025-06-02 20:06:51.445671 | orchestrator | ok: [testbed-node-1]
2025-06-02 20:06:51.445678 | orchestrator | ok: [testbed-node-2]
2025-06-02 20:06:51.445684 | orchestrator |
2025-06-02 20:06:51.445691 | orchestrator | TASK [ceph-mgr : Append dashboard modules to ceph_mgr_modules] *****************
2025-06-02 20:06:51.445697 | orchestrator | Monday 02 June 2025 20:01:22 +0000 (0:00:00.688) 0:05:33.172 ***********
2025-06-02 20:06:51.445704 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:06:51.445711 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:06:51.445717 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:06:51.445724 | orchestrator |
2025-06-02 20:06:51.445731 | orchestrator | TASK [ceph-mgr : Include pre_requisite.yml] ************************************
2025-06-02 20:06:51.445737 | orchestrator | Monday 02 June 2025 20:01:23 +0000 (0:00:00.256) 0:05:33.428 ***********
2025-06-02 20:06:51.445748 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:06:51.445754 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:06:51.445761 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:06:51.445768 | orchestrator |
2025-06-02 20:06:51.445774 | orchestrator | TASK [ceph-mgr : Include start_mgr.yml] ****************************************
2025-06-02 20:06:51.445781 | orchestrator | Monday 02 June 2025 20:01:23 +0000 (0:00:00.257) 0:05:33.686 ***********
2025-06-02 20:06:51.445788 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/start_mgr.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-02 20:06:51.445794 | orchestrator |
2025-06-02 20:06:51.445801 | orchestrator | TASK [ceph-mgr : Ensure systemd service override directory exists] *************
2025-06-02 20:06:51.445807 | orchestrator | Monday 02 June 2025 20:01:24 +0000 (0:00:00.639) 0:05:34.325 ***********
2025-06-02 20:06:51.445814 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:06:51.445820 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:06:51.445827 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:06:51.445834 | orchestrator |
2025-06-02 20:06:51.445840 | orchestrator | TASK [ceph-mgr : Add ceph-mgr systemd service overrides] ***********************
2025-06-02 20:06:51.445847 | orchestrator | Monday 02 June 2025 20:01:24 +0000 (0:00:00.359) 0:05:34.685 ***********
2025-06-02 20:06:51.445854 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:06:51.445860 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:06:51.445867 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:06:51.445873 | orchestrator |
2025-06-02 20:06:51.445948 | orchestrator | TASK [ceph-mgr : Include_tasks systemd.yml] ************************************
2025-06-02 20:06:51.445956 | orchestrator | Monday 02 June 2025 20:01:24 +0000 (0:00:00.276) 0:05:34.961 ***********
2025-06-02 20:06:51.445962 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-02 20:06:51.445969 | orchestrator |
2025-06-02 20:06:51.445976 | orchestrator | TASK [ceph-mgr : Generate systemd unit file] ***********************************
2025-06-02 20:06:51.445982 | orchestrator | Monday 02 June 2025 20:01:25 +0000 (0:00:00.582) 0:05:35.544 ***********
2025-06-02 20:06:51.445989 | orchestrator | changed: [testbed-node-0]
2025-06-02 20:06:51.445995 | orchestrator | changed: [testbed-node-2]
2025-06-02 20:06:51.446002 | orchestrator | changed: [testbed-node-1]
2025-06-02 20:06:51.446009 | orchestrator |
2025-06-02 20:06:51.446037 | orchestrator | TASK [ceph-mgr : Generate systemd ceph-mgr target file] ************************
2025-06-02 20:06:51.446051 | orchestrator | Monday 02 June 2025 20:01:26 +0000 (0:00:01.299) 0:05:36.844 ***********
2025-06-02 20:06:51.446058 | orchestrator | changed: [testbed-node-0]
2025-06-02 20:06:51.446064 | orchestrator | changed: [testbed-node-2]
2025-06-02 20:06:51.446071 | orchestrator | changed: [testbed-node-1]
2025-06-02 20:06:51.446077 | orchestrator |
2025-06-02 20:06:51.446084 | orchestrator | TASK [ceph-mgr : Enable ceph-mgr.target] ***************************************
2025-06-02 20:06:51.446090 | orchestrator | Monday 02 June 2025 20:01:27 +0000 (0:00:01.147) 0:05:37.991 ***********
2025-06-02 20:06:51.446097 | orchestrator | changed: [testbed-node-0]
2025-06-02 20:06:51.446103 | orchestrator | changed: [testbed-node-1]
2025-06-02 20:06:51.446110 | orchestrator | changed: [testbed-node-2]
2025-06-02 20:06:51.446116 | orchestrator |
2025-06-02 20:06:51.446123 | orchestrator | TASK [ceph-mgr : Systemd start mgr] ********************************************
2025-06-02 20:06:51.446129 | orchestrator | Monday 02 June 2025 20:01:29 +0000 (0:00:02.065) 0:05:40.056 ***********
2025-06-02 20:06:51.446136 | orchestrator | changed: [testbed-node-2]
2025-06-02 20:06:51.446142 | orchestrator | changed: [testbed-node-0]
2025-06-02 20:06:51.446149 | orchestrator | changed: [testbed-node-1]
2025-06-02 20:06:51.446155 | orchestrator |
2025-06-02 20:06:51.446162 | orchestrator | TASK [ceph-mgr : Include
mgr_modules.yml] ************************************** 2025-06-02 20:06:51.446169 | orchestrator | Monday 02 June 2025 20:01:32 +0000 (0:00:02.920) 0:05:42.977 *********** 2025-06-02 20:06:51.446175 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:06:51.446182 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:06:51.446188 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/mgr_modules.yml for testbed-node-2 2025-06-02 20:06:51.446195 | orchestrator | 2025-06-02 20:06:51.446202 | orchestrator | TASK [ceph-mgr : Wait for all mgr to be up] ************************************ 2025-06-02 20:06:51.446208 | orchestrator | Monday 02 June 2025 20:01:33 +0000 (0:00:00.392) 0:05:43.370 *********** 2025-06-02 20:06:51.446215 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (30 retries left). 2025-06-02 20:06:51.446222 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (29 retries left). 2025-06-02 20:06:51.446251 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (28 retries left). 2025-06-02 20:06:51.446259 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (27 retries left). 2025-06-02 20:06:51.446266 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (26 retries left). 
2025-06-02 20:06:51.446273 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2025-06-02 20:06:51.446279 | orchestrator | 2025-06-02 20:06:51.446286 | orchestrator | TASK [ceph-mgr : Get enabled modules from ceph-mgr] **************************** 2025-06-02 20:06:51.446292 | orchestrator | Monday 02 June 2025 20:02:03 +0000 (0:00:30.477) 0:06:13.847 *********** 2025-06-02 20:06:51.446299 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2025-06-02 20:06:51.446305 | orchestrator | 2025-06-02 20:06:51.446312 | orchestrator | TASK [ceph-mgr : Set _ceph_mgr_modules fact (convert _ceph_mgr_modules.stdout to a dict)] *** 2025-06-02 20:06:51.446319 | orchestrator | Monday 02 June 2025 20:02:05 +0000 (0:00:01.569) 0:06:15.417 *********** 2025-06-02 20:06:51.446325 | orchestrator | ok: [testbed-node-2] 2025-06-02 20:06:51.446332 | orchestrator | 2025-06-02 20:06:51.446338 | orchestrator | TASK [ceph-mgr : Set _disabled_ceph_mgr_modules fact] ************************** 2025-06-02 20:06:51.446345 | orchestrator | Monday 02 June 2025 20:02:06 +0000 (0:00:00.914) 0:06:16.332 *********** 2025-06-02 20:06:51.446351 | orchestrator | ok: [testbed-node-2] 2025-06-02 20:06:51.446358 | orchestrator | 2025-06-02 20:06:51.446369 | orchestrator | TASK [ceph-mgr : Disable ceph mgr enabled modules] ***************************** 2025-06-02 20:06:51.446376 | orchestrator | Monday 02 June 2025 20:02:06 +0000 (0:00:00.163) 0:06:16.496 *********** 2025-06-02 20:06:51.446382 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=iostat) 2025-06-02 20:06:51.446394 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=nfs) 2025-06-02 20:06:51.446400 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=restful) 2025-06-02 20:06:51.446407 | orchestrator | 2025-06-02 20:06:51.446413 | orchestrator | TASK [ceph-mgr : Add modules to ceph-mgr] 
************************************** 2025-06-02 20:06:51.446420 | orchestrator | Monday 02 June 2025 20:02:12 +0000 (0:00:06.562) 0:06:23.058 *********** 2025-06-02 20:06:51.446426 | orchestrator | skipping: [testbed-node-2] => (item=balancer)  2025-06-02 20:06:51.446433 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=dashboard) 2025-06-02 20:06:51.446440 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=prometheus) 2025-06-02 20:06:51.446446 | orchestrator | skipping: [testbed-node-2] => (item=status)  2025-06-02 20:06:51.446453 | orchestrator | 2025-06-02 20:06:51.446459 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2025-06-02 20:06:51.446466 | orchestrator | Monday 02 June 2025 20:02:17 +0000 (0:00:04.857) 0:06:27.916 *********** 2025-06-02 20:06:51.446472 | orchestrator | changed: [testbed-node-0] 2025-06-02 20:06:51.446479 | orchestrator | changed: [testbed-node-1] 2025-06-02 20:06:51.446485 | orchestrator | changed: [testbed-node-2] 2025-06-02 20:06:51.446492 | orchestrator | 2025-06-02 20:06:51.446498 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] ********************************** 2025-06-02 20:06:51.446505 | orchestrator | Monday 02 June 2025 20:02:18 +0000 (0:00:00.944) 0:06:28.860 *********** 2025-06-02 20:06:51.446512 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 20:06:51.446518 | orchestrator | 2025-06-02 20:06:51.446525 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ******** 2025-06-02 20:06:51.446531 | orchestrator | Monday 02 June 2025 20:02:19 +0000 (0:00:00.580) 0:06:29.441 *********** 2025-06-02 20:06:51.446538 | orchestrator | ok: [testbed-node-0] 2025-06-02 20:06:51.446545 | orchestrator | ok: [testbed-node-1] 2025-06-02 20:06:51.446551 | orchestrator | ok: 
[testbed-node-2] 2025-06-02 20:06:51.446558 | orchestrator | 2025-06-02 20:06:51.446564 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] *********************** 2025-06-02 20:06:51.446571 | orchestrator | Monday 02 June 2025 20:02:19 +0000 (0:00:00.309) 0:06:29.750 *********** 2025-06-02 20:06:51.446577 | orchestrator | changed: [testbed-node-0] 2025-06-02 20:06:51.446584 | orchestrator | changed: [testbed-node-1] 2025-06-02 20:06:51.446591 | orchestrator | changed: [testbed-node-2] 2025-06-02 20:06:51.446597 | orchestrator | 2025-06-02 20:06:51.446604 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ******************** 2025-06-02 20:06:51.446610 | orchestrator | Monday 02 June 2025 20:02:21 +0000 (0:00:01.592) 0:06:31.342 *********** 2025-06-02 20:06:51.446617 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-06-02 20:06:51.446623 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-06-02 20:06:51.446630 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-06-02 20:06:51.446636 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:06:51.446643 | orchestrator | 2025-06-02 20:06:51.446649 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] ********* 2025-06-02 20:06:51.446656 | orchestrator | Monday 02 June 2025 20:02:21 +0000 (0:00:00.634) 0:06:31.977 *********** 2025-06-02 20:06:51.446663 | orchestrator | ok: [testbed-node-0] 2025-06-02 20:06:51.446669 | orchestrator | ok: [testbed-node-1] 2025-06-02 20:06:51.446676 | orchestrator | ok: [testbed-node-2] 2025-06-02 20:06:51.446682 | orchestrator | 2025-06-02 20:06:51.446689 | orchestrator | PLAY [Apply role ceph-osd] ***************************************************** 2025-06-02 20:06:51.446695 | orchestrator | 2025-06-02 20:06:51.446702 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2025-06-02 
20:06:51.446708 | orchestrator | Monday 02 June 2025 20:02:22 +0000 (0:00:00.527) 0:06:32.504 *********** 2025-06-02 20:06:51.446720 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-02 20:06:51.446726 | orchestrator | 2025-06-02 20:06:51.446733 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2025-06-02 20:06:51.446760 | orchestrator | Monday 02 June 2025 20:02:22 +0000 (0:00:00.717) 0:06:33.222 *********** 2025-06-02 20:06:51.446768 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-02 20:06:51.446775 | orchestrator | 2025-06-02 20:06:51.446782 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2025-06-02 20:06:51.446788 | orchestrator | Monday 02 June 2025 20:02:23 +0000 (0:00:00.535) 0:06:33.758 *********** 2025-06-02 20:06:51.446795 | orchestrator | skipping: [testbed-node-3] 2025-06-02 20:06:51.446801 | orchestrator | skipping: [testbed-node-4] 2025-06-02 20:06:51.446808 | orchestrator | skipping: [testbed-node-5] 2025-06-02 20:06:51.446815 | orchestrator | 2025-06-02 20:06:51.446821 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2025-06-02 20:06:51.446828 | orchestrator | Monday 02 June 2025 20:02:23 +0000 (0:00:00.296) 0:06:34.054 *********** 2025-06-02 20:06:51.446835 | orchestrator | ok: [testbed-node-3] 2025-06-02 20:06:51.446841 | orchestrator | ok: [testbed-node-4] 2025-06-02 20:06:51.446848 | orchestrator | ok: [testbed-node-5] 2025-06-02 20:06:51.446854 | orchestrator | 2025-06-02 20:06:51.446861 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2025-06-02 20:06:51.446867 | orchestrator | Monday 02 June 2025 20:02:24 +0000 (0:00:00.952) 0:06:35.007 *********** 
2025-06-02 20:06:51.446874 | orchestrator | ok: [testbed-node-3] 2025-06-02 20:06:51.446893 | orchestrator | ok: [testbed-node-4] 2025-06-02 20:06:51.446900 | orchestrator | ok: [testbed-node-5] 2025-06-02 20:06:51.446907 | orchestrator | 2025-06-02 20:06:51.446917 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2025-06-02 20:06:51.446924 | orchestrator | Monday 02 June 2025 20:02:25 +0000 (0:00:00.633) 0:06:35.640 *********** 2025-06-02 20:06:51.446931 | orchestrator | ok: [testbed-node-3] 2025-06-02 20:06:51.446937 | orchestrator | ok: [testbed-node-4] 2025-06-02 20:06:51.446944 | orchestrator | ok: [testbed-node-5] 2025-06-02 20:06:51.446950 | orchestrator | 2025-06-02 20:06:51.446957 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2025-06-02 20:06:51.446963 | orchestrator | Monday 02 June 2025 20:02:26 +0000 (0:00:00.680) 0:06:36.321 *********** 2025-06-02 20:06:51.446970 | orchestrator | skipping: [testbed-node-3] 2025-06-02 20:06:51.446976 | orchestrator | skipping: [testbed-node-4] 2025-06-02 20:06:51.446983 | orchestrator | skipping: [testbed-node-5] 2025-06-02 20:06:51.446990 | orchestrator | 2025-06-02 20:06:51.446996 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2025-06-02 20:06:51.447003 | orchestrator | Monday 02 June 2025 20:02:26 +0000 (0:00:00.299) 0:06:36.621 *********** 2025-06-02 20:06:51.447009 | orchestrator | skipping: [testbed-node-3] 2025-06-02 20:06:51.447016 | orchestrator | skipping: [testbed-node-4] 2025-06-02 20:06:51.447022 | orchestrator | skipping: [testbed-node-5] 2025-06-02 20:06:51.447029 | orchestrator | 2025-06-02 20:06:51.447035 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2025-06-02 20:06:51.447042 | orchestrator | Monday 02 June 2025 20:02:26 +0000 (0:00:00.566) 0:06:37.187 *********** 2025-06-02 20:06:51.447049 | 
orchestrator | skipping: [testbed-node-3] 2025-06-02 20:06:51.447055 | orchestrator | skipping: [testbed-node-4] 2025-06-02 20:06:51.447062 | orchestrator | skipping: [testbed-node-5] 2025-06-02 20:06:51.447068 | orchestrator | 2025-06-02 20:06:51.447075 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2025-06-02 20:06:51.447081 | orchestrator | Monday 02 June 2025 20:02:27 +0000 (0:00:00.314) 0:06:37.502 *********** 2025-06-02 20:06:51.447088 | orchestrator | ok: [testbed-node-3] 2025-06-02 20:06:51.447094 | orchestrator | ok: [testbed-node-4] 2025-06-02 20:06:51.447106 | orchestrator | ok: [testbed-node-5] 2025-06-02 20:06:51.447112 | orchestrator | 2025-06-02 20:06:51.447119 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2025-06-02 20:06:51.447125 | orchestrator | Monday 02 June 2025 20:02:27 +0000 (0:00:00.671) 0:06:38.173 *********** 2025-06-02 20:06:51.447132 | orchestrator | ok: [testbed-node-3] 2025-06-02 20:06:51.447139 | orchestrator | ok: [testbed-node-4] 2025-06-02 20:06:51.447145 | orchestrator | ok: [testbed-node-5] 2025-06-02 20:06:51.447152 | orchestrator | 2025-06-02 20:06:51.447158 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-06-02 20:06:51.447165 | orchestrator | Monday 02 June 2025 20:02:28 +0000 (0:00:00.692) 0:06:38.866 *********** 2025-06-02 20:06:51.447172 | orchestrator | skipping: [testbed-node-3] 2025-06-02 20:06:51.447178 | orchestrator | skipping: [testbed-node-4] 2025-06-02 20:06:51.447185 | orchestrator | skipping: [testbed-node-5] 2025-06-02 20:06:51.447191 | orchestrator | 2025-06-02 20:06:51.447198 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2025-06-02 20:06:51.447205 | orchestrator | Monday 02 June 2025 20:02:29 +0000 (0:00:00.575) 0:06:39.441 *********** 2025-06-02 20:06:51.447211 | orchestrator | skipping: 
[testbed-node-3] 2025-06-02 20:06:51.447218 | orchestrator | skipping: [testbed-node-4] 2025-06-02 20:06:51.447224 | orchestrator | skipping: [testbed-node-5] 2025-06-02 20:06:51.447231 | orchestrator | 2025-06-02 20:06:51.447237 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-06-02 20:06:51.447244 | orchestrator | Monday 02 June 2025 20:02:29 +0000 (0:00:00.355) 0:06:39.797 *********** 2025-06-02 20:06:51.447250 | orchestrator | ok: [testbed-node-3] 2025-06-02 20:06:51.447257 | orchestrator | ok: [testbed-node-4] 2025-06-02 20:06:51.447263 | orchestrator | ok: [testbed-node-5] 2025-06-02 20:06:51.447270 | orchestrator | 2025-06-02 20:06:51.447276 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2025-06-02 20:06:51.447283 | orchestrator | Monday 02 June 2025 20:02:29 +0000 (0:00:00.337) 0:06:40.135 *********** 2025-06-02 20:06:51.447290 | orchestrator | ok: [testbed-node-3] 2025-06-02 20:06:51.447296 | orchestrator | ok: [testbed-node-4] 2025-06-02 20:06:51.447303 | orchestrator | ok: [testbed-node-5] 2025-06-02 20:06:51.447309 | orchestrator | 2025-06-02 20:06:51.447315 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-06-02 20:06:51.447322 | orchestrator | Monday 02 June 2025 20:02:30 +0000 (0:00:00.344) 0:06:40.479 *********** 2025-06-02 20:06:51.447329 | orchestrator | ok: [testbed-node-3] 2025-06-02 20:06:51.447335 | orchestrator | ok: [testbed-node-4] 2025-06-02 20:06:51.447342 | orchestrator | ok: [testbed-node-5] 2025-06-02 20:06:51.447348 | orchestrator | 2025-06-02 20:06:51.447355 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2025-06-02 20:06:51.447362 | orchestrator | Monday 02 June 2025 20:02:30 +0000 (0:00:00.648) 0:06:41.127 *********** 2025-06-02 20:06:51.447372 | orchestrator | skipping: [testbed-node-3] 2025-06-02 20:06:51.447379 | 
orchestrator | skipping: [testbed-node-4] 2025-06-02 20:06:51.447386 | orchestrator | skipping: [testbed-node-5] 2025-06-02 20:06:51.447392 | orchestrator | 2025-06-02 20:06:51.447399 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-06-02 20:06:51.447405 | orchestrator | Monday 02 June 2025 20:02:31 +0000 (0:00:00.348) 0:06:41.476 *********** 2025-06-02 20:06:51.447412 | orchestrator | skipping: [testbed-node-3] 2025-06-02 20:06:51.447419 | orchestrator | skipping: [testbed-node-4] 2025-06-02 20:06:51.447425 | orchestrator | skipping: [testbed-node-5] 2025-06-02 20:06:51.447432 | orchestrator | 2025-06-02 20:06:51.447438 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2025-06-02 20:06:51.447445 | orchestrator | Monday 02 June 2025 20:02:31 +0000 (0:00:00.338) 0:06:41.814 *********** 2025-06-02 20:06:51.447451 | orchestrator | skipping: [testbed-node-3] 2025-06-02 20:06:51.447458 | orchestrator | skipping: [testbed-node-4] 2025-06-02 20:06:51.447464 | orchestrator | skipping: [testbed-node-5] 2025-06-02 20:06:51.447471 | orchestrator | 2025-06-02 20:06:51.447478 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2025-06-02 20:06:51.447488 | orchestrator | Monday 02 June 2025 20:02:31 +0000 (0:00:00.298) 0:06:42.112 *********** 2025-06-02 20:06:51.447495 | orchestrator | ok: [testbed-node-3] 2025-06-02 20:06:51.447502 | orchestrator | ok: [testbed-node-4] 2025-06-02 20:06:51.447508 | orchestrator | ok: [testbed-node-5] 2025-06-02 20:06:51.447515 | orchestrator | 2025-06-02 20:06:51.447521 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2025-06-02 20:06:51.447531 | orchestrator | Monday 02 June 2025 20:02:32 +0000 (0:00:00.609) 0:06:42.722 *********** 2025-06-02 20:06:51.447538 | orchestrator | ok: [testbed-node-3] 2025-06-02 20:06:51.447544 | orchestrator | ok: 
[testbed-node-4] 2025-06-02 20:06:51.447551 | orchestrator | ok: [testbed-node-5] 2025-06-02 20:06:51.447557 | orchestrator | 2025-06-02 20:06:51.447564 | orchestrator | TASK [ceph-osd : Set_fact add_osd] ********************************************* 2025-06-02 20:06:51.447570 | orchestrator | Monday 02 June 2025 20:02:32 +0000 (0:00:00.558) 0:06:43.281 *********** 2025-06-02 20:06:51.447577 | orchestrator | ok: [testbed-node-3] 2025-06-02 20:06:51.447583 | orchestrator | ok: [testbed-node-4] 2025-06-02 20:06:51.447590 | orchestrator | ok: [testbed-node-5] 2025-06-02 20:06:51.447596 | orchestrator | 2025-06-02 20:06:51.447603 | orchestrator | TASK [ceph-osd : Set_fact container_exec_cmd] ********************************** 2025-06-02 20:06:51.447610 | orchestrator | Monday 02 June 2025 20:02:33 +0000 (0:00:00.313) 0:06:43.594 *********** 2025-06-02 20:06:51.447616 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-06-02 20:06:51.447623 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-06-02 20:06:51.447629 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-06-02 20:06:51.447636 | orchestrator | 2025-06-02 20:06:51.447642 | orchestrator | TASK [ceph-osd : Include_tasks system_tuning.yml] ****************************** 2025-06-02 20:06:51.447649 | orchestrator | Monday 02 June 2025 20:02:34 +0000 (0:00:00.905) 0:06:44.499 *********** 2025-06-02 20:06:51.447655 | orchestrator | included: /ansible/roles/ceph-osd/tasks/system_tuning.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-02 20:06:51.447662 | orchestrator | 2025-06-02 20:06:51.447668 | orchestrator | TASK [ceph-osd : Create tmpfiles.d directory] ********************************** 2025-06-02 20:06:51.447675 | orchestrator | Monday 02 June 2025 20:02:35 +0000 (0:00:00.866) 0:06:45.366 *********** 2025-06-02 20:06:51.447681 | orchestrator | skipping: 
[testbed-node-3] 2025-06-02 20:06:51.447688 | orchestrator | skipping: [testbed-node-4] 2025-06-02 20:06:51.447695 | orchestrator | skipping: [testbed-node-5] 2025-06-02 20:06:51.447701 | orchestrator | 2025-06-02 20:06:51.447708 | orchestrator | TASK [ceph-osd : Disable transparent hugepage] ********************************* 2025-06-02 20:06:51.447714 | orchestrator | Monday 02 June 2025 20:02:35 +0000 (0:00:00.352) 0:06:45.718 *********** 2025-06-02 20:06:51.447721 | orchestrator | skipping: [testbed-node-3] 2025-06-02 20:06:51.447727 | orchestrator | skipping: [testbed-node-4] 2025-06-02 20:06:51.447734 | orchestrator | skipping: [testbed-node-5] 2025-06-02 20:06:51.447740 | orchestrator | 2025-06-02 20:06:51.447747 | orchestrator | TASK [ceph-osd : Get default vm.min_free_kbytes] ******************************* 2025-06-02 20:06:51.447753 | orchestrator | Monday 02 June 2025 20:02:35 +0000 (0:00:00.293) 0:06:46.012 *********** 2025-06-02 20:06:51.447760 | orchestrator | ok: [testbed-node-3] 2025-06-02 20:06:51.447766 | orchestrator | ok: [testbed-node-4] 2025-06-02 20:06:51.447773 | orchestrator | ok: [testbed-node-5] 2025-06-02 20:06:51.447779 | orchestrator | 2025-06-02 20:06:51.447786 | orchestrator | TASK [ceph-osd : Set_fact vm_min_free_kbytes] ********************************** 2025-06-02 20:06:51.447792 | orchestrator | Monday 02 June 2025 20:02:36 +0000 (0:00:00.998) 0:06:47.010 *********** 2025-06-02 20:06:51.447799 | orchestrator | ok: [testbed-node-3] 2025-06-02 20:06:51.447805 | orchestrator | ok: [testbed-node-4] 2025-06-02 20:06:51.447812 | orchestrator | ok: [testbed-node-5] 2025-06-02 20:06:51.447818 | orchestrator | 2025-06-02 20:06:51.447825 | orchestrator | TASK [ceph-osd : Apply operating system tuning] ******************************** 2025-06-02 20:06:51.447836 | orchestrator | Monday 02 June 2025 20:02:37 +0000 (0:00:00.332) 0:06:47.343 *********** 2025-06-02 20:06:51.447842 | orchestrator | changed: [testbed-node-3] => (item={'name': 
'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2025-06-02 20:06:51.447849 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2025-06-02 20:06:51.447855 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2025-06-02 20:06:51.447862 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.file-max', 'value': 26234859}) 2025-06-02 20:06:51.447869 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.file-max', 'value': 26234859}) 2025-06-02 20:06:51.447875 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.file-max', 'value': 26234859}) 2025-06-02 20:06:51.447894 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2025-06-02 20:06:51.447908 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2025-06-02 20:06:51.447921 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 10}) 2025-06-02 20:06:51.447931 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 10}) 2025-06-02 20:06:51.447938 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2025-06-02 20:06:51.447944 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2025-06-02 20:06:51.447951 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2025-06-02 20:06:51.447957 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 10}) 2025-06-02 20:06:51.447964 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2025-06-02 20:06:51.447970 | orchestrator | 2025-06-02 20:06:51.447977 | orchestrator | TASK [ceph-osd : Install dependencies] ***************************************** 
2025-06-02 20:06:51.447983 | orchestrator | Monday 02 June 2025 20:02:40 +0000 (0:00:03.191) 0:06:50.535 *********** 2025-06-02 20:06:51.447990 | orchestrator | skipping: [testbed-node-3] 2025-06-02 20:06:51.447996 | orchestrator | skipping: [testbed-node-4] 2025-06-02 20:06:51.448003 | orchestrator | skipping: [testbed-node-5] 2025-06-02 20:06:51.448009 | orchestrator | 2025-06-02 20:06:51.448019 | orchestrator | TASK [ceph-osd : Include_tasks common.yml] ************************************* 2025-06-02 20:06:51.448026 | orchestrator | Monday 02 June 2025 20:02:40 +0000 (0:00:00.358) 0:06:50.893 *********** 2025-06-02 20:06:51.448032 | orchestrator | included: /ansible/roles/ceph-osd/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-02 20:06:51.448039 | orchestrator | 2025-06-02 20:06:51.448046 | orchestrator | TASK [ceph-osd : Create bootstrap-osd and osd directories] ********************* 2025-06-02 20:06:51.448052 | orchestrator | Monday 02 June 2025 20:02:41 +0000 (0:00:00.752) 0:06:51.646 *********** 2025-06-02 20:06:51.448059 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd/) 2025-06-02 20:06:51.448065 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd/) 2025-06-02 20:06:51.448072 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd/) 2025-06-02 20:06:51.448078 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/osd/) 2025-06-02 20:06:51.448085 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/osd/) 2025-06-02 20:06:51.448091 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/osd/) 2025-06-02 20:06:51.448098 | orchestrator | 2025-06-02 20:06:51.448104 | orchestrator | TASK [ceph-osd : Get keys from monitors] *************************************** 2025-06-02 20:06:51.448111 | orchestrator | Monday 02 June 2025 20:02:42 +0000 (0:00:01.047) 0:06:52.694 *********** 2025-06-02 20:06:51.448117 | orchestrator | ok: [testbed-node-3 -> 
testbed-node-0(192.168.16.10)] => (item=None)
2025-06-02 20:06:51.448123 | orchestrator | skipping: [testbed-node-3] => (item=None)
2025-06-02 20:06:51.448135 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}]
2025-06-02 20:06:51.448141 | orchestrator |
2025-06-02 20:06:51.448148 | orchestrator | TASK [ceph-osd : Copy ceph key(s) if needed] ***********************************
2025-06-02 20:06:51.448154 | orchestrator | Monday 02 June 2025 20:02:44 +0000 (0:00:02.145) 0:06:54.840 ***********
2025-06-02 20:06:51.448161 | orchestrator | changed: [testbed-node-3] => (item=None)
2025-06-02 20:06:51.448167 | orchestrator | skipping: [testbed-node-3] => (item=None)
2025-06-02 20:06:51.448174 | orchestrator | changed: [testbed-node-3]
2025-06-02 20:06:51.448180 | orchestrator | changed: [testbed-node-4] => (item=None)
2025-06-02 20:06:51.448187 | orchestrator | skipping: [testbed-node-4] => (item=None)
2025-06-02 20:06:51.448193 | orchestrator | changed: [testbed-node-4]
2025-06-02 20:06:51.448200 | orchestrator | changed: [testbed-node-5] => (item=None)
2025-06-02 20:06:51.448206 | orchestrator | skipping: [testbed-node-5] => (item=None)
2025-06-02 20:06:51.448213 | orchestrator | changed: [testbed-node-5]
2025-06-02 20:06:51.448220 | orchestrator |
2025-06-02 20:06:51.448226 | orchestrator | TASK [ceph-osd : Set noup flag] ************************************************
2025-06-02 20:06:51.448232 | orchestrator | Monday 02 June 2025 20:02:46 +0000 (0:00:01.450) 0:06:56.290 ***********
2025-06-02 20:06:51.448239 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2025-06-02 20:06:51.448245 | orchestrator |
2025-06-02 20:06:51.448252 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm.yml] ******************************
2025-06-02 20:06:51.448258 | orchestrator | Monday 02 June 2025 20:02:48 +0000 (0:00:02.261) 0:06:58.551 ***********
2025-06-02 20:06:51.448265 | orchestrator | included: /ansible/roles/ceph-osd/tasks/scenarios/lvm.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-06-02 20:06:51.448271 | orchestrator |
2025-06-02 20:06:51.448278 | orchestrator | TASK [ceph-osd : Use ceph-volume to create osds] *******************************
2025-06-02 20:06:51.448284 | orchestrator | Monday 02 June 2025 20:02:48 +0000 (0:00:00.539) 0:06:59.090 ***********
2025-06-02 20:06:51.448291 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-bdb59653-b88e-5628-a878-3ed7677d43f1', 'data_vg': 'ceph-bdb59653-b88e-5628-a878-3ed7677d43f1'})
2025-06-02 20:06:51.448299 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-86208513-8fbd-535b-80fd-915c228be133', 'data_vg': 'ceph-86208513-8fbd-535b-80fd-915c228be133'})
2025-06-02 20:06:51.448305 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-93e9f309-356a-50f8-bf6b-26db11b00033', 'data_vg': 'ceph-93e9f309-356a-50f8-bf6b-26db11b00033'})
2025-06-02 20:06:51.448317 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-ee20b18c-4531-5b6f-acaf-50beaceb257d', 'data_vg': 'ceph-ee20b18c-4531-5b6f-acaf-50beaceb257d'})
2025-06-02 20:06:51.448324 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-ed769c7c-5756-52eb-9583-a607cefce370', 'data_vg': 'ceph-ed769c7c-5756-52eb-9583-a607cefce370'})
2025-06-02 20:06:51.448330 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-01a13ba8-1f69-5051-bec5-e01e7e9b87e5', 'data_vg': 'ceph-01a13ba8-1f69-5051-bec5-e01e7e9b87e5'})
2025-06-02 20:06:51.448337 | orchestrator |
2025-06-02 20:06:51.448343 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm-batch.yml] ************************
2025-06-02 20:06:51.448350 | orchestrator | Monday 02 June 2025 20:03:29 +0000 (0:00:40.689) 0:07:39.780 ***********
2025-06-02 20:06:51.448356 | orchestrator | skipping: [testbed-node-3]
2025-06-02 20:06:51.448363 | orchestrator | skipping: [testbed-node-4]
2025-06-02 20:06:51.448369 | orchestrator | skipping: [testbed-node-5]
2025-06-02 20:06:51.448376 | orchestrator |
2025-06-02 20:06:51.448382 | orchestrator | TASK [ceph-osd : Include_tasks start_osds.yml] *********************************
2025-06-02 20:06:51.448389 | orchestrator | Monday 02 June 2025 20:03:30 +0000 (0:00:00.534) 0:07:40.315 ***********
2025-06-02 20:06:51.448395 | orchestrator | included: /ansible/roles/ceph-osd/tasks/start_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-06-02 20:06:51.448402 | orchestrator |
2025-06-02 20:06:51.448413 | orchestrator | TASK [ceph-osd : Get osd ids] **************************************************
2025-06-02 20:06:51.448420 | orchestrator | Monday 02 June 2025 20:03:30 +0000 (0:00:00.570) 0:07:40.885 ***********
2025-06-02 20:06:51.448426 | orchestrator | ok: [testbed-node-3]
2025-06-02 20:06:51.448433 | orchestrator | ok: [testbed-node-4]
2025-06-02 20:06:51.448439 | orchestrator | ok: [testbed-node-5]
2025-06-02 20:06:51.448446 | orchestrator |
2025-06-02 20:06:51.448452 | orchestrator | TASK [ceph-osd : Collect osd ids] **********************************************
2025-06-02 20:06:51.448459 | orchestrator | Monday 02 June 2025 20:03:31 +0000 (0:00:00.761) 0:07:41.647 ***********
2025-06-02 20:06:51.448465 | orchestrator | ok: [testbed-node-4]
2025-06-02 20:06:51.448472 | orchestrator | ok: [testbed-node-3]
2025-06-02 20:06:51.448478 | orchestrator | ok: [testbed-node-5]
2025-06-02 20:06:51.448484 | orchestrator |
2025-06-02 20:06:51.448491 | orchestrator | TASK [ceph-osd : Include_tasks systemd.yml] ************************************
2025-06-02 20:06:51.448497 | orchestrator | Monday 02 June 2025 20:03:34 +0000 (0:00:02.826) 0:07:44.473 ***********
2025-06-02 20:06:51.448504 | orchestrator | included: /ansible/roles/ceph-osd/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-06-02 20:06:51.448510 | orchestrator |
2025-06-02 20:06:51.448517 | orchestrator | TASK [ceph-osd : Generate systemd unit file] ***********************************
2025-06-02 20:06:51.448523 | orchestrator | Monday 02 June 2025 20:03:34 +0000 (0:00:00.565) 0:07:45.038 ***********
2025-06-02 20:06:51.448530 | orchestrator | changed: [testbed-node-3]
2025-06-02 20:06:51.448536 | orchestrator | changed: [testbed-node-4]
2025-06-02 20:06:51.448543 | orchestrator | changed: [testbed-node-5]
2025-06-02 20:06:51.448549 | orchestrator |
2025-06-02 20:06:51.448556 | orchestrator | TASK [ceph-osd : Generate systemd ceph-osd target file] ************************
2025-06-02 20:06:51.448563 | orchestrator | Monday 02 June 2025 20:03:36 +0000 (0:00:01.322) 0:07:46.361 ***********
2025-06-02 20:06:51.448569 | orchestrator | changed: [testbed-node-3]
2025-06-02 20:06:51.448576 | orchestrator | changed: [testbed-node-4]
2025-06-02 20:06:51.448582 | orchestrator | changed: [testbed-node-5]
2025-06-02 20:06:51.448589 | orchestrator |
2025-06-02 20:06:51.448595 | orchestrator | TASK [ceph-osd : Enable ceph-osd.target] ***************************************
2025-06-02 20:06:51.448601 | orchestrator | Monday 02 June 2025 20:03:37 +0000 (0:00:01.305) 0:07:47.667 ***********
2025-06-02 20:06:51.448608 | orchestrator | changed: [testbed-node-4]
2025-06-02 20:06:51.448614 | orchestrator | changed: [testbed-node-3]
2025-06-02 20:06:51.448621 | orchestrator | changed: [testbed-node-5]
2025-06-02 20:06:51.448627 | orchestrator |
2025-06-02 20:06:51.448634 | orchestrator | TASK [ceph-osd : Ensure systemd service override directory exists] *************
2025-06-02 20:06:51.448641 | orchestrator | Monday 02 June 2025 20:03:39 +0000 (0:00:01.622) 0:07:49.289 ***********
2025-06-02 20:06:51.448647 | orchestrator | skipping: [testbed-node-3]
2025-06-02 20:06:51.448653 | orchestrator | skipping: [testbed-node-4]
2025-06-02 20:06:51.448660 | orchestrator | skipping: [testbed-node-5]
2025-06-02 20:06:51.448666 | orchestrator |
2025-06-02 20:06:51.448673 | orchestrator | TASK [ceph-osd : Add ceph-osd systemd service overrides] ***********************
2025-06-02 20:06:51.448679 | orchestrator | Monday 02 June 2025 20:03:39 +0000 (0:00:00.324) 0:07:49.613 ***********
2025-06-02 20:06:51.448685 | orchestrator | skipping: [testbed-node-3]
2025-06-02 20:06:51.448692 | orchestrator | skipping: [testbed-node-4]
2025-06-02 20:06:51.448698 | orchestrator | skipping: [testbed-node-5]
2025-06-02 20:06:51.448705 | orchestrator |
2025-06-02 20:06:51.448711 | orchestrator | TASK [ceph-osd : Ensure /var/lib/ceph/osd/- is present] *********
2025-06-02 20:06:51.448718 | orchestrator | Monday 02 June 2025 20:03:39 +0000 (0:00:00.308) 0:07:49.922 ***********
2025-06-02 20:06:51.448755 | orchestrator | ok: [testbed-node-3] => (item=5)
2025-06-02 20:06:51.448762 | orchestrator | ok: [testbed-node-4] => (item=0)
2025-06-02 20:06:51.448768 | orchestrator | ok: [testbed-node-5] => (item=1)
2025-06-02 20:06:51.448775 | orchestrator | ok: [testbed-node-3] => (item=2)
2025-06-02 20:06:51.448781 | orchestrator | ok: [testbed-node-4] => (item=3)
2025-06-02 20:06:51.448796 | orchestrator | ok: [testbed-node-5] => (item=4)
2025-06-02 20:06:51.448802 | orchestrator |
2025-06-02 20:06:51.448809 | orchestrator | TASK [ceph-osd : Write run file in /var/lib/ceph/osd/xxxx/run] *****************
2025-06-02 20:06:51.448815 | orchestrator | Monday 02 June 2025 20:03:40 +0000 (0:00:01.185) 0:07:51.108 ***********
2025-06-02 20:06:51.448822 | orchestrator | changed: [testbed-node-3] => (item=5)
2025-06-02 20:06:51.448828 | orchestrator | changed: [testbed-node-4] => (item=0)
2025-06-02 20:06:51.448835 | orchestrator | changed: [testbed-node-5] => (item=1)
2025-06-02 20:06:51.448841 | orchestrator | changed: [testbed-node-4] => (item=3)
2025-06-02 20:06:51.448848 | orchestrator | changed: [testbed-node-3] => (item=2)
2025-06-02 20:06:51.448854 | orchestrator | changed: [testbed-node-5] => (item=4)
2025-06-02 20:06:51.448861 | orchestrator |
2025-06-02 20:06:51.448867 | orchestrator | TASK [ceph-osd : Systemd start osd] ********************************************
2025-06-02 20:06:51.448916 | orchestrator | Monday 02 June 2025 20:03:42 +0000 (0:00:02.001) 0:07:53.110 ***********
2025-06-02 20:06:51.448925 | orchestrator | changed: [testbed-node-3] => (item=5)
2025-06-02 20:06:51.448932 | orchestrator | changed: [testbed-node-4] => (item=0)
2025-06-02 20:06:51.448938 | orchestrator | changed: [testbed-node-5] => (item=1)
2025-06-02 20:06:51.448945 | orchestrator | changed: [testbed-node-4] => (item=3)
2025-06-02 20:06:51.448951 | orchestrator | changed: [testbed-node-3] => (item=2)
2025-06-02 20:06:51.448958 | orchestrator | changed: [testbed-node-5] => (item=4)
2025-06-02 20:06:51.448964 | orchestrator |
2025-06-02 20:06:51.448971 | orchestrator | TASK [ceph-osd : Unset noup flag] **********************************************
2025-06-02 20:06:51.448978 | orchestrator | Monday 02 June 2025 20:03:46 +0000 (0:00:03.298) 0:07:56.409 ***********
2025-06-02 20:06:51.448984 | orchestrator | skipping: [testbed-node-3]
2025-06-02 20:06:51.448991 | orchestrator | skipping: [testbed-node-4]
2025-06-02 20:06:51.448997 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)]
2025-06-02 20:06:51.449004 | orchestrator |
2025-06-02 20:06:51.449010 | orchestrator | TASK [ceph-osd : Wait for all osd to be up] ************************************
2025-06-02 20:06:51.449017 | orchestrator | Monday 02 June 2025 20:03:49 +0000 (0:00:03.241) 0:07:59.650 ***********
2025-06-02 20:06:51.449023 | orchestrator | skipping: [testbed-node-3]
2025-06-02 20:06:51.449030 | orchestrator | skipping: [testbed-node-4]
2025-06-02 20:06:51.449036 | orchestrator | FAILED - RETRYING: [testbed-node-5 -> testbed-node-0]: Wait for all osd to be up (60 retries left).
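The "Wait for all osd to be up" task above fails once and retries (60 retries budgeted) before succeeding about 13 seconds later. Underneath, this is just a bounded poll of cluster state. A minimal Python sketch of that retry pattern, with the real probe (which in ceph-ansible compares the up-OSD count from `ceph osd stat` against the expected count) replaced by a fake that succeeds on the second attempt, matching the single `FAILED - RETRYING` line in this log:

```python
import time

def wait_for(predicate, retries=60, delay=1.0):
    """Poll `predicate` until it returns True or the retry budget runs out.

    Each False result corresponds to one 'FAILED - RETRYING' line in an
    Ansible log for a task with `retries`/`delay`/`until` set.
    """
    for attempt in range(retries):
        if predicate():
            return attempt + 1  # number of attempts used
        time.sleep(delay)
    raise TimeoutError("condition not met within retry budget")

# Fake probe (assumption for illustration): fails once, then succeeds,
# like the OSD-up check in this run.
results = iter([False, True])
attempts = wait_for(lambda: next(results), retries=60, delay=0.0)
```

With `delay=0.0` the fake completes immediately; the real task spaces its attempts out, which is why the log shows ~13 s elapsed for this step.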
2025-06-02 20:06:51.449043 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)]
2025-06-02 20:06:51.449050 | orchestrator |
2025-06-02 20:06:51.449061 | orchestrator | TASK [ceph-osd : Include crush_rules.yml] **************************************
2025-06-02 20:06:51.449068 | orchestrator | Monday 02 June 2025 20:04:02 +0000 (0:00:13.030) 0:08:12.681 ***********
2025-06-02 20:06:51.449074 | orchestrator | skipping: [testbed-node-3]
2025-06-02 20:06:51.449081 | orchestrator | skipping: [testbed-node-4]
2025-06-02 20:06:51.449088 | orchestrator | skipping: [testbed-node-5]
2025-06-02 20:06:51.449094 | orchestrator |
2025-06-02 20:06:51.449101 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2025-06-02 20:06:51.449107 | orchestrator | Monday 02 June 2025 20:04:03 +0000 (0:00:00.859) 0:08:13.541 ***********
2025-06-02 20:06:51.449114 | orchestrator | skipping: [testbed-node-3]
2025-06-02 20:06:51.449120 | orchestrator | skipping: [testbed-node-4]
2025-06-02 20:06:51.449127 | orchestrator | skipping: [testbed-node-5]
2025-06-02 20:06:51.449133 | orchestrator |
2025-06-02 20:06:51.449140 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] **********************************
2025-06-02 20:06:51.449146 | orchestrator | Monday 02 June 2025 20:04:03 +0000 (0:00:00.592) 0:08:14.134 ***********
2025-06-02 20:06:51.449153 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-06-02 20:06:51.449159 | orchestrator |
2025-06-02 20:06:51.449166 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] **********************
2025-06-02 20:06:51.449173 | orchestrator | Monday 02 June 2025 20:04:04 +0000 (0:00:00.365) 0:08:14.694 ***********
2025-06-02 20:06:51.449184 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-06-02 20:06:51.449191 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-06-02 20:06:51.449198 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-06-02 20:06:51.449204 | orchestrator | skipping: [testbed-node-3]
2025-06-02 20:06:51.449211 | orchestrator |
2025-06-02 20:06:51.449217 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ********
2025-06-02 20:06:51.449224 | orchestrator | Monday 02 June 2025 20:04:04 +0000 (0:00:00.292) 0:08:15.060 ***********
2025-06-02 20:06:51.449230 | orchestrator | skipping: [testbed-node-3]
2025-06-02 20:06:51.449237 | orchestrator | skipping: [testbed-node-4]
2025-06-02 20:06:51.449243 | orchestrator | skipping: [testbed-node-5]
2025-06-02 20:06:51.449250 | orchestrator |
2025-06-02 20:06:51.449256 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] *******************************
2025-06-02 20:06:51.449263 | orchestrator | Monday 02 June 2025 20:04:05 +0000 (0:00:00.228) 0:08:15.352 ***********
2025-06-02 20:06:51.449269 | orchestrator | skipping: [testbed-node-3]
2025-06-02 20:06:51.449276 | orchestrator |
2025-06-02 20:06:51.449282 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] ***********************
2025-06-02 20:06:51.449289 | orchestrator | Monday 02 June 2025 20:04:05 +0000 (0:00:00.590) 0:08:15.581 ***********
2025-06-02 20:06:51.449295 | orchestrator | skipping: [testbed-node-3]
2025-06-02 20:06:51.449302 | orchestrator | skipping: [testbed-node-4]
2025-06-02 20:06:51.449308 | orchestrator | skipping: [testbed-node-5]
2025-06-02 20:06:51.449315 | orchestrator |
2025-06-02 20:06:51.449321 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] *********************************
2025-06-02 20:06:51.449328 | orchestrator | Monday 02 June 2025 20:04:05 +0000 (0:00:00.590) 0:08:16.171 ***********
2025-06-02 20:06:51.449334 | orchestrator | skipping: [testbed-node-3]
2025-06-02 20:06:51.449341 | orchestrator |
2025-06-02 20:06:51.449347 | orchestrator | RUNNING HANDLER [ceph-handler : Get balancer module status] ********************
2025-06-02 20:06:51.449354 | orchestrator | Monday 02 June 2025 20:04:06 +0000 (0:00:00.228) 0:08:16.400 ***********
2025-06-02 20:06:51.449360 | orchestrator | skipping: [testbed-node-3]
2025-06-02 20:06:51.449367 | orchestrator |
2025-06-02 20:06:51.449373 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] **************
2025-06-02 20:06:51.449380 | orchestrator | Monday 02 June 2025 20:04:06 +0000 (0:00:00.219) 0:08:16.620 ***********
2025-06-02 20:06:51.449386 | orchestrator | skipping: [testbed-node-3]
2025-06-02 20:06:51.449393 | orchestrator |
2025-06-02 20:06:51.449399 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ******************************
2025-06-02 20:06:51.449406 | orchestrator | Monday 02 June 2025 20:04:06 +0000 (0:00:00.123) 0:08:16.743 ***********
2025-06-02 20:06:51.449413 | orchestrator | skipping: [testbed-node-3]
2025-06-02 20:06:51.449419 | orchestrator |
2025-06-02 20:06:51.449426 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] *****************
2025-06-02 20:06:51.449432 | orchestrator | Monday 02 June 2025 20:04:06 +0000 (0:00:00.220) 0:08:16.963 ***********
2025-06-02 20:06:51.449439 | orchestrator | skipping: [testbed-node-3]
2025-06-02 20:06:51.449445 | orchestrator |
2025-06-02 20:06:51.449456 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] *******************
2025-06-02 20:06:51.449463 | orchestrator | Monday 02 June 2025 20:04:06 +0000 (0:00:00.218) 0:08:17.182 ***********
2025-06-02 20:06:51.449469 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-06-02 20:06:51.449475 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-06-02 20:06:51.449481 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-06-02 20:06:51.449487 | orchestrator | skipping: [testbed-node-3]
2025-06-02 20:06:51.449493 | orchestrator |
2025-06-02 20:06:51.449499 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] *********
2025-06-02 20:06:51.449505 | orchestrator | Monday 02 June 2025 20:04:07 +0000 (0:00:00.379) 0:08:17.561 ***********
2025-06-02 20:06:51.449511 | orchestrator | skipping: [testbed-node-3]
2025-06-02 20:06:51.449522 | orchestrator | skipping: [testbed-node-4]
2025-06-02 20:06:51.449528 | orchestrator | skipping: [testbed-node-5]
2025-06-02 20:06:51.449534 | orchestrator |
2025-06-02 20:06:51.449540 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] ***************
2025-06-02 20:06:51.449546 | orchestrator | Monday 02 June 2025 20:04:07 +0000 (0:00:00.284) 0:08:17.846 ***********
2025-06-02 20:06:51.449552 | orchestrator | skipping: [testbed-node-3]
2025-06-02 20:06:51.449558 | orchestrator |
2025-06-02 20:06:51.449564 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] ****************************
2025-06-02 20:06:51.449570 | orchestrator | Monday 02 June 2025 20:04:08 +0000 (0:00:00.789) 0:08:18.635 ***********
2025-06-02 20:06:51.449576 | orchestrator | skipping: [testbed-node-3]
2025-06-02 20:06:51.449582 | orchestrator |
2025-06-02 20:06:51.449591 | orchestrator | PLAY [Apply role ceph-crash] ***************************************************
2025-06-02 20:06:51.449598 | orchestrator |
2025-06-02 20:06:51.449604 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2025-06-02 20:06:51.449610 | orchestrator | Monday 02 June 2025 20:04:09 +0000 (0:00:00.659) 0:08:19.295 ***********
2025-06-02 20:06:51.449616 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-06-02 20:06:51.449623 | orchestrator |
2025-06-02 20:06:51.449629 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2025-06-02 20:06:51.449635 | orchestrator | Monday 02 June 2025 20:04:10 +0000 (0:00:01.184) 0:08:20.479 ***********
2025-06-02 20:06:51.449641 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-06-02 20:06:51.449647 | orchestrator |
2025-06-02 20:06:51.449653 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2025-06-02 20:06:51.449659 | orchestrator | Monday 02 June 2025 20:04:11 +0000 (0:00:01.266) 0:08:21.746 ***********
2025-06-02 20:06:51.449665 | orchestrator | skipping: [testbed-node-3]
2025-06-02 20:06:51.449671 | orchestrator | ok: [testbed-node-0]
2025-06-02 20:06:51.449677 | orchestrator | skipping: [testbed-node-4]
2025-06-02 20:06:51.449683 | orchestrator | ok: [testbed-node-1]
2025-06-02 20:06:51.449690 | orchestrator | ok: [testbed-node-2]
2025-06-02 20:06:51.449696 | orchestrator | skipping: [testbed-node-5]
2025-06-02 20:06:51.449702 | orchestrator |
2025-06-02 20:06:51.449708 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2025-06-02 20:06:51.449714 | orchestrator | Monday 02 June 2025 20:04:12 +0000 (0:00:01.045) 0:08:22.791 ***********
2025-06-02 20:06:51.449720 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:06:51.449726 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:06:51.449732 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:06:51.449738 | orchestrator | ok: [testbed-node-3]
2025-06-02 20:06:51.449744 | orchestrator | ok: [testbed-node-4]
2025-06-02 20:06:51.449751 | orchestrator | ok: [testbed-node-5]
2025-06-02 20:06:51.449757 | orchestrator |
2025-06-02 20:06:51.449763 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2025-06-02 20:06:51.449769 | orchestrator | Monday 02 June 2025 20:04:13 +0000 (0:00:01.010) 0:08:23.802 ***********
2025-06-02 20:06:51.449775 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:06:51.449781 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:06:51.449787 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:06:51.449793 | orchestrator | ok: [testbed-node-3]
2025-06-02 20:06:51.449799 | orchestrator | ok: [testbed-node-4]
2025-06-02 20:06:51.449805 | orchestrator | ok: [testbed-node-5]
2025-06-02 20:06:51.449811 | orchestrator |
2025-06-02 20:06:51.449817 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2025-06-02 20:06:51.449823 | orchestrator | Monday 02 June 2025 20:04:14 +0000 (0:00:01.235) 0:08:25.038 ***********
2025-06-02 20:06:51.449830 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:06:51.449836 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:06:51.449846 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:06:51.449852 | orchestrator | ok: [testbed-node-3]
2025-06-02 20:06:51.449858 | orchestrator | ok: [testbed-node-4]
2025-06-02 20:06:51.449864 | orchestrator | ok: [testbed-node-5]
2025-06-02 20:06:51.449870 | orchestrator |
2025-06-02 20:06:51.449876 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2025-06-02 20:06:51.449897 | orchestrator | Monday 02 June 2025 20:04:15 +0000 (0:00:01.050) 0:08:26.089 ***********
2025-06-02 20:06:51.449904 | orchestrator | skipping: [testbed-node-3]
2025-06-02 20:06:51.449910 | orchestrator | skipping: [testbed-node-4]
2025-06-02 20:06:51.449916 | orchestrator | ok: [testbed-node-0]
2025-06-02 20:06:51.449922 | orchestrator | ok: [testbed-node-1]
2025-06-02 20:06:51.449928 | orchestrator | ok: [testbed-node-2]
2025-06-02 20:06:51.449934 | orchestrator | skipping: [testbed-node-5]
2025-06-02 20:06:51.449940 | orchestrator |
2025-06-02 20:06:51.449946 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2025-06-02 20:06:51.449952 | orchestrator | Monday 02 June 2025 20:04:16 +0000 (0:00:00.830) 0:08:26.919 ***********
2025-06-02 20:06:51.449958 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:06:51.449964 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:06:51.449970 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:06:51.449976 | orchestrator | skipping: [testbed-node-3]
2025-06-02 20:06:51.449982 | orchestrator | skipping: [testbed-node-4]
2025-06-02 20:06:51.449988 | orchestrator | skipping: [testbed-node-5]
2025-06-02 20:06:51.449995 | orchestrator |
2025-06-02 20:06:51.450005 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2025-06-02 20:06:51.450011 | orchestrator | Monday 02 June 2025 20:04:17 +0000 (0:00:00.618) 0:08:27.537 ***********
2025-06-02 20:06:51.450039 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:06:51.450046 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:06:51.450052 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:06:51.450058 | orchestrator | skipping: [testbed-node-3]
2025-06-02 20:06:51.450064 | orchestrator | skipping: [testbed-node-4]
2025-06-02 20:06:51.450070 | orchestrator | skipping: [testbed-node-5]
2025-06-02 20:06:51.450077 | orchestrator |
2025-06-02 20:06:51.450083 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2025-06-02 20:06:51.450089 | orchestrator | Monday 02 June 2025 20:04:18 +0000 (0:00:00.848) 0:08:28.385 ***********
2025-06-02 20:06:51.450095 | orchestrator | ok: [testbed-node-0]
2025-06-02 20:06:51.450101 | orchestrator | ok: [testbed-node-1]
2025-06-02 20:06:51.450107 | orchestrator | ok: [testbed-node-2]
2025-06-02 20:06:51.450113 | orchestrator | ok: [testbed-node-3]
2025-06-02 20:06:51.450119 | orchestrator | ok: [testbed-node-4]
2025-06-02 20:06:51.450125 | orchestrator | ok: [testbed-node-5]
2025-06-02 20:06:51.450131 | orchestrator |
2025-06-02 20:06:51.450138 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2025-06-02 20:06:51.450144 | orchestrator | Monday 02 June 2025 20:04:19 +0000 (0:00:01.032) 0:08:29.418 ***********
2025-06-02 20:06:51.450150 | orchestrator | ok: [testbed-node-0]
2025-06-02 20:06:51.450156 | orchestrator | ok: [testbed-node-1]
2025-06-02 20:06:51.450162 | orchestrator | ok: [testbed-node-2]
2025-06-02 20:06:51.450168 | orchestrator | ok: [testbed-node-3]
2025-06-02 20:06:51.450174 | orchestrator | ok: [testbed-node-4]
2025-06-02 20:06:51.450180 | orchestrator | ok: [testbed-node-5]
2025-06-02 20:06:51.450186 | orchestrator |
2025-06-02 20:06:51.450195 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2025-06-02 20:06:51.450202 | orchestrator | Monday 02 June 2025 20:04:20 +0000 (0:00:01.548) 0:08:30.966 ***********
2025-06-02 20:06:51.450208 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:06:51.450214 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:06:51.450220 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:06:51.450226 | orchestrator | skipping: [testbed-node-3]
2025-06-02 20:06:51.450232 | orchestrator | skipping: [testbed-node-4]
2025-06-02 20:06:51.450238 | orchestrator | skipping: [testbed-node-5]
2025-06-02 20:06:51.450250 | orchestrator |
2025-06-02 20:06:51.450256 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2025-06-02 20:06:51.450262 | orchestrator | Monday 02 June 2025 20:04:21 +0000 (0:00:00.588) 0:08:31.555 ***********
2025-06-02 20:06:51.450268 | orchestrator | ok: [testbed-node-0]
2025-06-02 20:06:51.450274 | orchestrator | ok: [testbed-node-1]
2025-06-02 20:06:51.450281 | orchestrator | ok: [testbed-node-2]
2025-06-02 20:06:51.450287 | orchestrator | skipping: [testbed-node-3]
2025-06-02 20:06:51.450293 | orchestrator | skipping: [testbed-node-4]
2025-06-02 20:06:51.450299 | orchestrator | skipping: [testbed-node-5]
2025-06-02 20:06:51.450305 | orchestrator |
2025-06-02 20:06:51.450311 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2025-06-02 20:06:51.450317 | orchestrator | Monday 02 June 2025 20:04:22 +0000 (0:00:00.862) 0:08:32.418 ***********
2025-06-02 20:06:51.450323 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:06:51.450329 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:06:51.450335 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:06:51.450341 | orchestrator | ok: [testbed-node-3]
2025-06-02 20:06:51.450348 | orchestrator | ok: [testbed-node-4]
2025-06-02 20:06:51.450354 | orchestrator | ok: [testbed-node-5]
2025-06-02 20:06:51.450360 | orchestrator |
2025-06-02 20:06:51.450366 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2025-06-02 20:06:51.450372 | orchestrator | Monday 02 June 2025 20:04:22 +0000 (0:00:00.686) 0:08:33.104 ***********
2025-06-02 20:06:51.450378 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:06:51.450384 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:06:51.450390 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:06:51.450397 | orchestrator | ok: [testbed-node-3]
2025-06-02 20:06:51.450403 | orchestrator | ok: [testbed-node-4]
2025-06-02 20:06:51.450409 | orchestrator | ok: [testbed-node-5]
2025-06-02 20:06:51.450415 | orchestrator |
2025-06-02 20:06:51.450421 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2025-06-02 20:06:51.450430 | orchestrator | Monday 02 June 2025 20:04:23 +0000 (0:00:00.899) 0:08:34.003 ***********
2025-06-02 20:06:51.450439 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:06:51.450449 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:06:51.450459 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:06:51.450469 | orchestrator | ok: [testbed-node-3]
2025-06-02 20:06:51.450478 | orchestrator | ok: [testbed-node-4]
2025-06-02 20:06:51.450486 | orchestrator | ok: [testbed-node-5]
2025-06-02 20:06:51.450495 | orchestrator |
2025-06-02 20:06:51.450504 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2025-06-02 20:06:51.450513 | orchestrator | Monday 02 June 2025 20:04:24 +0000 (0:00:00.670) 0:08:34.673 ***********
2025-06-02 20:06:51.450523 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:06:51.450532 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:06:51.450541 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:06:51.450550 | orchestrator | skipping: [testbed-node-3]
2025-06-02 20:06:51.450560 | orchestrator | skipping: [testbed-node-4]
2025-06-02 20:06:51.450570 | orchestrator | skipping: [testbed-node-5]
2025-06-02 20:06:51.450580 | orchestrator |
2025-06-02 20:06:51.450591 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2025-06-02 20:06:51.450598 | orchestrator | Monday 02 June 2025 20:04:25 +0000 (0:00:00.806) 0:08:35.480 ***********
2025-06-02 20:06:51.450604 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:06:51.450610 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:06:51.450616 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:06:51.450622 | orchestrator | skipping: [testbed-node-3]
2025-06-02 20:06:51.450628 | orchestrator | skipping: [testbed-node-4]
2025-06-02 20:06:51.450634 | orchestrator | skipping: [testbed-node-5]
2025-06-02 20:06:51.450640 | orchestrator |
2025-06-02 20:06:51.450646 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2025-06-02 20:06:51.450652 | orchestrator | Monday 02 June 2025 20:04:25 +0000 (0:00:00.662) 0:08:36.142 ***********
2025-06-02 20:06:51.450666 | orchestrator | ok: [testbed-node-0]
2025-06-02 20:06:51.450672 | orchestrator | ok: [testbed-node-1]
2025-06-02 20:06:51.450678 | orchestrator | ok: [testbed-node-2]
2025-06-02 20:06:51.450684 | orchestrator | skipping: [testbed-node-3]
2025-06-02 20:06:51.450690 | orchestrator | skipping: [testbed-node-4]
2025-06-02 20:06:51.450696 | orchestrator | skipping: [testbed-node-5]
2025-06-02 20:06:51.450702 | orchestrator |
2025-06-02 20:06:51.450713 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2025-06-02 20:06:51.450719 | orchestrator | Monday 02 June 2025 20:04:26 +0000 (0:00:00.826) 0:08:36.969 ***********
2025-06-02 20:06:51.450725 | orchestrator | ok: [testbed-node-0]
2025-06-02 20:06:51.450731 | orchestrator | ok: [testbed-node-1]
2025-06-02 20:06:51.450737 | orchestrator | ok: [testbed-node-2]
2025-06-02 20:06:51.450743 | orchestrator | ok: [testbed-node-3]
2025-06-02 20:06:51.450749 | orchestrator | ok: [testbed-node-4]
2025-06-02 20:06:51.450755 | orchestrator | ok: [testbed-node-5]
2025-06-02 20:06:51.450761 | orchestrator |
2025-06-02 20:06:51.450767 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2025-06-02 20:06:51.450773 | orchestrator | Monday 02 June 2025 20:04:27 +0000 (0:00:00.637) 0:08:37.607 ***********
2025-06-02 20:06:51.450779 | orchestrator | ok: [testbed-node-0]
2025-06-02 20:06:51.450785 | orchestrator | ok: [testbed-node-1]
2025-06-02 20:06:51.450791 | orchestrator | ok: [testbed-node-2]
2025-06-02 20:06:51.450797 | orchestrator | ok: [testbed-node-3]
2025-06-02 20:06:51.450803 | orchestrator | ok: [testbed-node-4]
2025-06-02 20:06:51.450809 | orchestrator | ok: [testbed-node-5]
2025-06-02 20:06:51.450815 | orchestrator |
2025-06-02 20:06:51.450821 | orchestrator | TASK [ceph-crash : Create client.crash keyring] ********************************
2025-06-02 20:06:51.450827 | orchestrator | Monday 02 June 2025 20:04:28 +0000 (0:00:01.217) 0:08:38.824 ***********
2025-06-02 20:06:51.450833 | orchestrator | changed: [testbed-node-0]
2025-06-02 20:06:51.450839 | orchestrator |
2025-06-02 20:06:51.450846 | orchestrator | TASK [ceph-crash : Get keys from monitors] *************************************
2025-06-02 20:06:51.450856 | orchestrator | Monday 02 June 2025 20:04:32 +0000 (0:00:03.937) 0:08:42.761 ***********
2025-06-02 20:06:51.450862 | orchestrator | ok: [testbed-node-0]
2025-06-02 20:06:51.450868 | orchestrator |
2025-06-02 20:06:51.450874 | orchestrator | TASK [ceph-crash : Copy ceph key(s) if needed] *********************************
2025-06-02 20:06:51.450917 | orchestrator | Monday 02 June 2025 20:04:34 +0000 (0:00:02.089) 0:08:44.851 ***********
2025-06-02 20:06:51.450925 | orchestrator | ok: [testbed-node-0]
2025-06-02 20:06:51.450931 | orchestrator | changed: [testbed-node-1]
2025-06-02 20:06:51.450937 | orchestrator | changed: [testbed-node-2]
2025-06-02 20:06:51.450943 | orchestrator | changed: [testbed-node-3]
2025-06-02 20:06:51.450949 | orchestrator | changed: [testbed-node-4]
2025-06-02 20:06:51.450955 | orchestrator | changed: [testbed-node-5]
2025-06-02 20:06:51.450961 | orchestrator |
2025-06-02 20:06:51.450967 | orchestrator | TASK [ceph-crash : Create /var/lib/ceph/crash/posted] **************************
2025-06-02 20:06:51.450973 | orchestrator | Monday 02 June 2025 20:04:36 +0000 (0:00:01.918) 0:08:46.769 ***********
2025-06-02 20:06:51.450979 | orchestrator | changed: [testbed-node-0]
2025-06-02 20:06:51.450985 | orchestrator | changed: [testbed-node-1]
2025-06-02 20:06:51.450991 | orchestrator | changed: [testbed-node-2]
2025-06-02 20:06:51.450997 | orchestrator | changed: [testbed-node-3]
2025-06-02 20:06:51.451003 | orchestrator | changed: [testbed-node-4]
2025-06-02 20:06:51.451009 | orchestrator | changed: [testbed-node-5]
2025-06-02 20:06:51.451015 | orchestrator |
2025-06-02 20:06:51.451021 | orchestrator | TASK [ceph-crash : Include_tasks systemd.yml] **********************************
2025-06-02 20:06:51.451027 | orchestrator | Monday 02 June 2025 20:04:37 +0000 (0:00:00.947) 0:08:47.717 ***********
2025-06-02 20:06:51.451034 | orchestrator | included: /ansible/roles/ceph-crash/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-06-02 20:06:51.451041 | orchestrator |
2025-06-02 20:06:51.451047 | orchestrator | TASK [ceph-crash : Generate systemd unit file for ceph-crash container] ********
2025-06-02 20:06:51.451059 | orchestrator | Monday 02 June 2025 20:04:38 +0000 (0:00:01.183) 0:08:48.900 ***********
2025-06-02 20:06:51.451065 | orchestrator | changed: [testbed-node-0]
2025-06-02 20:06:51.451071 | orchestrator | changed: [testbed-node-1]
2025-06-02 20:06:51.451077 | orchestrator | changed: [testbed-node-2]
2025-06-02 20:06:51.451083 | orchestrator | changed: [testbed-node-3]
2025-06-02 20:06:51.451090 | orchestrator | changed: [testbed-node-4]
2025-06-02 20:06:51.451096 | orchestrator | changed: [testbed-node-5]
2025-06-02 20:06:51.451102 | orchestrator |
2025-06-02 20:06:51.451108 | orchestrator | TASK [ceph-crash : Start the ceph-crash service] *******************************
2025-06-02 20:06:51.451114 | orchestrator | Monday 02 June 2025 20:04:40 +0000 (0:00:01.903) 0:08:50.803 ***********
2025-06-02 20:06:51.451120 | orchestrator | changed: [testbed-node-1]
2025-06-02 20:06:51.451126 | orchestrator | changed: [testbed-node-0]
2025-06-02 20:06:51.451132 | orchestrator | changed: [testbed-node-2]
2025-06-02 20:06:51.451138 | orchestrator | changed: [testbed-node-3]
2025-06-02 20:06:51.451144 | orchestrator | changed: [testbed-node-4]
2025-06-02 20:06:51.451150 | orchestrator | changed: [testbed-node-5]
2025-06-02 20:06:51.451156 | orchestrator |
2025-06-02 20:06:51.451162 | orchestrator | RUNNING HANDLER [ceph-handler : Ceph crash handler] ****************************
2025-06-02 20:06:51.451168 | orchestrator | Monday 02 June 2025 20:04:43 +0000 (0:00:03.292) 0:08:54.096 ***********
2025-06-02 20:06:51.451174 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_crash.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-06-02 20:06:51.451181 | orchestrator |
2025-06-02 20:06:51.451187 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called before restart] ******
2025-06-02 20:06:51.451193 | orchestrator | Monday 02 June 2025 20:04:45 +0000 (0:00:01.310) 0:08:55.406 ***********
2025-06-02 20:06:51.451199 | orchestrator | ok: [testbed-node-0]
2025-06-02 20:06:51.451205 | orchestrator | ok: [testbed-node-1]
2025-06-02 20:06:51.451211 | orchestrator | ok: [testbed-node-2]
2025-06-02 20:06:51.451217 | orchestrator | ok: [testbed-node-3]
2025-06-02 20:06:51.451223 | orchestrator | ok: [testbed-node-4]
2025-06-02 20:06:51.451229 | orchestrator | ok: [testbed-node-5]
2025-06-02 20:06:51.451235 | orchestrator |
2025-06-02 20:06:51.451241 | orchestrator | RUNNING HANDLER [ceph-handler : Restart the ceph-crash service] ****************
2025-06-02 20:06:51.451247 | orchestrator | Monday 02 June 2025 20:04:46 +0000 (0:00:00.953) 0:08:56.360 ***********
2025-06-02 20:06:51.451253 | orchestrator | changed: [testbed-node-0]
2025-06-02 20:06:51.451259 | orchestrator | changed: [testbed-node-1]
2025-06-02 20:06:51.451265 | orchestrator | changed: [testbed-node-2]
2025-06-02 20:06:51.451271 | orchestrator | changed: [testbed-node-3]
2025-06-02 20:06:51.451277 | orchestrator | changed: [testbed-node-4]
2025-06-02 20:06:51.451283 | orchestrator | changed: [testbed-node-5]
2025-06-02 20:06:51.451289 | orchestrator |
2025-06-02 20:06:51.451296 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called after restart] *******
2025-06-02 20:06:51.451311 | orchestrator | Monday 02 June 2025 20:04:48 +0000 (0:00:02.244) 0:08:58.605 ***********
2025-06-02 20:06:51.451317 | orchestrator | ok: [testbed-node-0]
2025-06-02 20:06:51.451323 | orchestrator | ok: [testbed-node-1]
2025-06-02 20:06:51.451329 | orchestrator | ok: [testbed-node-2]
2025-06-02 20:06:51.451335 | orchestrator | ok: [testbed-node-3]
2025-06-02 20:06:51.451341 | orchestrator | ok: [testbed-node-4]
2025-06-02 20:06:51.451347 | orchestrator | ok: [testbed-node-5]
2025-06-02 20:06:51.451353 | orchestrator |
2025-06-02 20:06:51.451360 | orchestrator | PLAY [Apply role ceph-mds] *****************************************************
2025-06-02 20:06:51.451366 | orchestrator |
2025-06-02 20:06:51.451372 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2025-06-02 20:06:51.451378 | orchestrator | Monday 02 June 2025 20:04:49 +0000 (0:00:01.032) 0:08:59.638 ***********
2025-06-02 20:06:51.451384 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-06-02 20:06:51.451390 | orchestrator |
2025-06-02 20:06:51.451400 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2025-06-02 20:06:51.451406 | orchestrator | Monday 02 June 2025 20:04:49 +0000 (0:00:00.425) 0:09:00.063 ***********
2025-06-02 20:06:51.451413 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-06-02 20:06:51.451419 | orchestrator |
2025-06-02 20:06:51.451425 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2025-06-02 20:06:51.451435 | orchestrator | Monday 02 June 2025 20:04:50 +0000 (0:00:00.589) 0:09:00.653 ***********
2025-06-02 20:06:51.451441 | orchestrator | skipping: [testbed-node-3]
2025-06-02 20:06:51.451447 | orchestrator | skipping: [testbed-node-4]
2025-06-02 20:06:51.451454 | orchestrator | skipping: [testbed-node-5]
2025-06-02 20:06:51.451460 | orchestrator |
2025-06-02 20:06:51.451466 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2025-06-02 20:06:51.451471 | orchestrator |
Monday 02 June 2025 20:04:50 +0000 (0:00:00.346) 0:09:00.999 *********** 2025-06-02 20:06:51.451476 | orchestrator | ok: [testbed-node-3] 2025-06-02 20:06:51.451482 | orchestrator | ok: [testbed-node-4] 2025-06-02 20:06:51.451487 | orchestrator | ok: [testbed-node-5] 2025-06-02 20:06:51.451492 | orchestrator | 2025-06-02 20:06:51.451498 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2025-06-02 20:06:51.451503 | orchestrator | Monday 02 June 2025 20:04:51 +0000 (0:00:00.701) 0:09:01.701 *********** 2025-06-02 20:06:51.451509 | orchestrator | ok: [testbed-node-3] 2025-06-02 20:06:51.451514 | orchestrator | ok: [testbed-node-4] 2025-06-02 20:06:51.451519 | orchestrator | ok: [testbed-node-5] 2025-06-02 20:06:51.451524 | orchestrator | 2025-06-02 20:06:51.451530 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2025-06-02 20:06:51.451535 | orchestrator | Monday 02 June 2025 20:04:52 +0000 (0:00:00.849) 0:09:02.550 *********** 2025-06-02 20:06:51.451540 | orchestrator | ok: [testbed-node-3] 2025-06-02 20:06:51.451546 | orchestrator | ok: [testbed-node-4] 2025-06-02 20:06:51.451551 | orchestrator | ok: [testbed-node-5] 2025-06-02 20:06:51.451556 | orchestrator | 2025-06-02 20:06:51.451561 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2025-06-02 20:06:51.451567 | orchestrator | Monday 02 June 2025 20:04:52 +0000 (0:00:00.694) 0:09:03.245 *********** 2025-06-02 20:06:51.451572 | orchestrator | skipping: [testbed-node-3] 2025-06-02 20:06:51.451577 | orchestrator | skipping: [testbed-node-4] 2025-06-02 20:06:51.451583 | orchestrator | skipping: [testbed-node-5] 2025-06-02 20:06:51.451588 | orchestrator | 2025-06-02 20:06:51.451593 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2025-06-02 20:06:51.451599 | orchestrator | Monday 02 June 2025 20:04:53 +0000 (0:00:00.258) 
0:09:03.503 *********** 2025-06-02 20:06:51.451604 | orchestrator | skipping: [testbed-node-3] 2025-06-02 20:06:51.451609 | orchestrator | skipping: [testbed-node-4] 2025-06-02 20:06:51.451615 | orchestrator | skipping: [testbed-node-5] 2025-06-02 20:06:51.451620 | orchestrator | 2025-06-02 20:06:51.451625 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2025-06-02 20:06:51.451631 | orchestrator | Monday 02 June 2025 20:04:53 +0000 (0:00:00.245) 0:09:03.748 *********** 2025-06-02 20:06:51.451636 | orchestrator | skipping: [testbed-node-3] 2025-06-02 20:06:51.451641 | orchestrator | skipping: [testbed-node-4] 2025-06-02 20:06:51.451646 | orchestrator | skipping: [testbed-node-5] 2025-06-02 20:06:51.451652 | orchestrator | 2025-06-02 20:06:51.451657 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2025-06-02 20:06:51.451662 | orchestrator | Monday 02 June 2025 20:04:53 +0000 (0:00:00.385) 0:09:04.133 *********** 2025-06-02 20:06:51.451668 | orchestrator | ok: [testbed-node-3] 2025-06-02 20:06:51.451673 | orchestrator | ok: [testbed-node-4] 2025-06-02 20:06:51.451678 | orchestrator | ok: [testbed-node-5] 2025-06-02 20:06:51.451684 | orchestrator | 2025-06-02 20:06:51.451689 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2025-06-02 20:06:51.451694 | orchestrator | Monday 02 June 2025 20:04:54 +0000 (0:00:00.681) 0:09:04.814 *********** 2025-06-02 20:06:51.451704 | orchestrator | ok: [testbed-node-3] 2025-06-02 20:06:51.451709 | orchestrator | ok: [testbed-node-4] 2025-06-02 20:06:51.451715 | orchestrator | ok: [testbed-node-5] 2025-06-02 20:06:51.451720 | orchestrator | 2025-06-02 20:06:51.451725 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-06-02 20:06:51.451731 | orchestrator | Monday 02 June 2025 20:04:55 +0000 (0:00:00.675) 0:09:05.490 *********** 2025-06-02 
20:06:51.451736 | orchestrator | skipping: [testbed-node-3] 2025-06-02 20:06:51.451741 | orchestrator | skipping: [testbed-node-4] 2025-06-02 20:06:51.451746 | orchestrator | skipping: [testbed-node-5] 2025-06-02 20:06:51.451752 | orchestrator | 2025-06-02 20:06:51.451757 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2025-06-02 20:06:51.451762 | orchestrator | Monday 02 June 2025 20:04:55 +0000 (0:00:00.285) 0:09:05.776 *********** 2025-06-02 20:06:51.451768 | orchestrator | skipping: [testbed-node-3] 2025-06-02 20:06:51.451773 | orchestrator | skipping: [testbed-node-4] 2025-06-02 20:06:51.451778 | orchestrator | skipping: [testbed-node-5] 2025-06-02 20:06:51.451783 | orchestrator | 2025-06-02 20:06:51.451789 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-06-02 20:06:51.451794 | orchestrator | Monday 02 June 2025 20:04:55 +0000 (0:00:00.475) 0:09:06.251 *********** 2025-06-02 20:06:51.451802 | orchestrator | ok: [testbed-node-3] 2025-06-02 20:06:51.451808 | orchestrator | ok: [testbed-node-4] 2025-06-02 20:06:51.451813 | orchestrator | ok: [testbed-node-5] 2025-06-02 20:06:51.451818 | orchestrator | 2025-06-02 20:06:51.451824 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2025-06-02 20:06:51.451829 | orchestrator | Monday 02 June 2025 20:04:56 +0000 (0:00:00.395) 0:09:06.647 *********** 2025-06-02 20:06:51.451834 | orchestrator | ok: [testbed-node-3] 2025-06-02 20:06:51.451840 | orchestrator | ok: [testbed-node-4] 2025-06-02 20:06:51.451845 | orchestrator | ok: [testbed-node-5] 2025-06-02 20:06:51.451850 | orchestrator | 2025-06-02 20:06:51.451855 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-06-02 20:06:51.451861 | orchestrator | Monday 02 June 2025 20:04:56 +0000 (0:00:00.294) 0:09:06.941 *********** 2025-06-02 20:06:51.451866 | orchestrator | ok: 
[testbed-node-3] 2025-06-02 20:06:51.451871 | orchestrator | ok: [testbed-node-4] 2025-06-02 20:06:51.451877 | orchestrator | ok: [testbed-node-5] 2025-06-02 20:06:51.451894 | orchestrator | 2025-06-02 20:06:51.451900 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2025-06-02 20:06:51.451905 | orchestrator | Monday 02 June 2025 20:04:56 +0000 (0:00:00.262) 0:09:07.204 *********** 2025-06-02 20:06:51.451911 | orchestrator | skipping: [testbed-node-3] 2025-06-02 20:06:51.451916 | orchestrator | skipping: [testbed-node-4] 2025-06-02 20:06:51.451922 | orchestrator | skipping: [testbed-node-5] 2025-06-02 20:06:51.451927 | orchestrator | 2025-06-02 20:06:51.451932 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-06-02 20:06:51.451938 | orchestrator | Monday 02 June 2025 20:04:57 +0000 (0:00:00.429) 0:09:07.633 *********** 2025-06-02 20:06:51.451946 | orchestrator | skipping: [testbed-node-3] 2025-06-02 20:06:51.451952 | orchestrator | skipping: [testbed-node-4] 2025-06-02 20:06:51.451957 | orchestrator | skipping: [testbed-node-5] 2025-06-02 20:06:51.451963 | orchestrator | 2025-06-02 20:06:51.451968 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2025-06-02 20:06:51.451973 | orchestrator | Monday 02 June 2025 20:04:57 +0000 (0:00:00.290) 0:09:07.924 *********** 2025-06-02 20:06:51.451979 | orchestrator | skipping: [testbed-node-3] 2025-06-02 20:06:51.451984 | orchestrator | skipping: [testbed-node-4] 2025-06-02 20:06:51.451990 | orchestrator | skipping: [testbed-node-5] 2025-06-02 20:06:51.451995 | orchestrator | 2025-06-02 20:06:51.452000 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2025-06-02 20:06:51.452006 | orchestrator | Monday 02 June 2025 20:04:58 +0000 (0:00:00.390) 0:09:08.314 *********** 2025-06-02 20:06:51.452011 | orchestrator | ok: [testbed-node-3] 
2025-06-02 20:06:51.452023 | orchestrator | ok: [testbed-node-4] 2025-06-02 20:06:51.452028 | orchestrator | ok: [testbed-node-5] 2025-06-02 20:06:51.452034 | orchestrator | 2025-06-02 20:06:51.452039 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2025-06-02 20:06:51.452044 | orchestrator | Monday 02 June 2025 20:04:58 +0000 (0:00:00.361) 0:09:08.676 *********** 2025-06-02 20:06:51.452050 | orchestrator | ok: [testbed-node-3] 2025-06-02 20:06:51.452055 | orchestrator | ok: [testbed-node-4] 2025-06-02 20:06:51.452060 | orchestrator | ok: [testbed-node-5] 2025-06-02 20:06:51.452066 | orchestrator | 2025-06-02 20:06:51.452071 | orchestrator | TASK [ceph-mds : Include create_mds_filesystems.yml] *************************** 2025-06-02 20:06:51.452076 | orchestrator | Monday 02 June 2025 20:04:59 +0000 (0:00:00.648) 0:09:09.324 *********** 2025-06-02 20:06:51.452082 | orchestrator | skipping: [testbed-node-4] 2025-06-02 20:06:51.452087 | orchestrator | skipping: [testbed-node-5] 2025-06-02 20:06:51.452093 | orchestrator | included: /ansible/roles/ceph-mds/tasks/create_mds_filesystems.yml for testbed-node-3 2025-06-02 20:06:51.452098 | orchestrator | 2025-06-02 20:06:51.452104 | orchestrator | TASK [ceph-facts : Get current default crush rule details] ********************* 2025-06-02 20:06:51.452109 | orchestrator | Monday 02 June 2025 20:04:59 +0000 (0:00:00.349) 0:09:09.673 *********** 2025-06-02 20:06:51.452114 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-06-02 20:06:51.452120 | orchestrator | 2025-06-02 20:06:51.452125 | orchestrator | TASK [ceph-facts : Get current default crush rule name] ************************ 2025-06-02 20:06:51.452130 | orchestrator | Monday 02 June 2025 20:05:01 +0000 (0:00:02.383) 0:09:12.057 *********** 2025-06-02 20:06:51.452138 | orchestrator | skipping: [testbed-node-3] => (item={'rule_id': 0, 'rule_name': 'replicated_rule', 'type': 1, 'steps': [{'op': 
'take', 'item': -1, 'item_name': 'default'}, {'op': 'chooseleaf_firstn', 'num': 0, 'type': 'host'}, {'op': 'emit'}]})  2025-06-02 20:06:51.452145 | orchestrator | skipping: [testbed-node-3] 2025-06-02 20:06:51.452151 | orchestrator | 2025-06-02 20:06:51.452156 | orchestrator | TASK [ceph-mds : Create filesystem pools] ************************************** 2025-06-02 20:06:51.452162 | orchestrator | Monday 02 June 2025 20:05:01 +0000 (0:00:00.208) 0:09:12.265 *********** 2025-06-02 20:06:51.452169 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_data', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-06-02 20:06:51.452180 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_metadata', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-06-02 20:06:51.452185 | orchestrator | 2025-06-02 20:06:51.452191 | orchestrator | TASK [ceph-mds : Create ceph filesystem] *************************************** 2025-06-02 20:06:51.452196 | orchestrator | Monday 02 June 2025 20:05:10 +0000 (0:00:08.957) 0:09:21.223 *********** 2025-06-02 20:06:51.452202 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-06-02 20:06:51.452207 | orchestrator | 2025-06-02 20:06:51.452212 | orchestrator | TASK [ceph-mds : Include common.yml] ******************************************* 2025-06-02 20:06:51.452218 | orchestrator | Monday 02 June 2025 20:05:14 +0000 (0:00:03.945) 0:09:25.168 *********** 2025-06-02 20:06:51.452226 | orchestrator | included: /ansible/roles/ceph-mds/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-02 20:06:51.452232 | orchestrator | 2025-06-02 20:06:51.452237 | 
orchestrator | TASK [ceph-mds : Create bootstrap-mds and mds directories] ********************* 2025-06-02 20:06:51.452242 | orchestrator | Monday 02 June 2025 20:05:15 +0000 (0:00:00.514) 0:09:25.682 *********** 2025-06-02 20:06:51.452248 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds/) 2025-06-02 20:06:51.452253 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds/) 2025-06-02 20:06:51.452258 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds/) 2025-06-02 20:06:51.452269 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds/ceph-testbed-node-3) 2025-06-02 20:06:51.452274 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds/ceph-testbed-node-4) 2025-06-02 20:06:51.452279 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds/ceph-testbed-node-5) 2025-06-02 20:06:51.452285 | orchestrator | 2025-06-02 20:06:51.452290 | orchestrator | TASK [ceph-mds : Get keys from monitors] *************************************** 2025-06-02 20:06:51.452295 | orchestrator | Monday 02 June 2025 20:05:16 +0000 (0:00:01.084) 0:09:26.767 *********** 2025-06-02 20:06:51.452300 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-02 20:06:51.452306 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-06-02 20:06:51.452315 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2025-06-02 20:06:51.452320 | orchestrator | 2025-06-02 20:06:51.452325 | orchestrator | TASK [ceph-mds : Copy ceph key(s) if needed] *********************************** 2025-06-02 20:06:51.452331 | orchestrator | Monday 02 June 2025 20:05:18 +0000 (0:00:02.443) 0:09:29.211 *********** 2025-06-02 20:06:51.452336 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-06-02 20:06:51.452341 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-06-02 20:06:51.452347 | orchestrator | changed: [testbed-node-3] 
2025-06-02 20:06:51.452352 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-06-02 20:06:51.452357 | orchestrator | skipping: [testbed-node-4] => (item=None)  2025-06-02 20:06:51.452363 | orchestrator | changed: [testbed-node-4] 2025-06-02 20:06:51.452368 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-06-02 20:06:51.452373 | orchestrator | skipping: [testbed-node-5] => (item=None)  2025-06-02 20:06:51.452379 | orchestrator | changed: [testbed-node-5] 2025-06-02 20:06:51.452384 | orchestrator | 2025-06-02 20:06:51.452389 | orchestrator | TASK [ceph-mds : Create mds keyring] ******************************************* 2025-06-02 20:06:51.452395 | orchestrator | Monday 02 June 2025 20:05:20 +0000 (0:00:01.498) 0:09:30.710 *********** 2025-06-02 20:06:51.452400 | orchestrator | changed: [testbed-node-3] 2025-06-02 20:06:51.452405 | orchestrator | changed: [testbed-node-4] 2025-06-02 20:06:51.452410 | orchestrator | changed: [testbed-node-5] 2025-06-02 20:06:51.452416 | orchestrator | 2025-06-02 20:06:51.452421 | orchestrator | TASK [ceph-mds : Non_containerized.yml] **************************************** 2025-06-02 20:06:51.452426 | orchestrator | Monday 02 June 2025 20:05:23 +0000 (0:00:02.690) 0:09:33.400 *********** 2025-06-02 20:06:51.452432 | orchestrator | skipping: [testbed-node-3] 2025-06-02 20:06:51.452437 | orchestrator | skipping: [testbed-node-4] 2025-06-02 20:06:51.452442 | orchestrator | skipping: [testbed-node-5] 2025-06-02 20:06:51.452447 | orchestrator | 2025-06-02 20:06:51.452453 | orchestrator | TASK [ceph-mds : Containerized.yml] ******************************************** 2025-06-02 20:06:51.452458 | orchestrator | Monday 02 June 2025 20:05:23 +0000 (0:00:00.315) 0:09:33.716 *********** 2025-06-02 20:06:51.452463 | orchestrator | included: /ansible/roles/ceph-mds/tasks/containerized.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-02 20:06:51.452469 | orchestrator | 2025-06-02 20:06:51.452474 | 
orchestrator | TASK [ceph-mds : Include_tasks systemd.yml] ************************************ 2025-06-02 20:06:51.452479 | orchestrator | Monday 02 June 2025 20:05:24 +0000 (0:00:00.789) 0:09:34.505 *********** 2025-06-02 20:06:51.452485 | orchestrator | included: /ansible/roles/ceph-mds/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-02 20:06:51.452490 | orchestrator | 2025-06-02 20:06:51.452495 | orchestrator | TASK [ceph-mds : Generate systemd unit file] *********************************** 2025-06-02 20:06:51.452501 | orchestrator | Monday 02 June 2025 20:05:24 +0000 (0:00:00.525) 0:09:35.030 *********** 2025-06-02 20:06:51.452506 | orchestrator | changed: [testbed-node-3] 2025-06-02 20:06:51.452511 | orchestrator | changed: [testbed-node-4] 2025-06-02 20:06:51.452517 | orchestrator | changed: [testbed-node-5] 2025-06-02 20:06:51.452522 | orchestrator | 2025-06-02 20:06:51.452527 | orchestrator | TASK [ceph-mds : Generate systemd ceph-mds target file] ************************ 2025-06-02 20:06:51.452537 | orchestrator | Monday 02 June 2025 20:05:25 +0000 (0:00:01.213) 0:09:36.244 *********** 2025-06-02 20:06:51.452542 | orchestrator | changed: [testbed-node-3] 2025-06-02 20:06:51.452547 | orchestrator | changed: [testbed-node-4] 2025-06-02 20:06:51.452552 | orchestrator | changed: [testbed-node-5] 2025-06-02 20:06:51.452558 | orchestrator | 2025-06-02 20:06:51.452563 | orchestrator | TASK [ceph-mds : Enable ceph-mds.target] *************************************** 2025-06-02 20:06:51.452568 | orchestrator | Monday 02 June 2025 20:05:27 +0000 (0:00:01.503) 0:09:37.748 *********** 2025-06-02 20:06:51.452574 | orchestrator | changed: [testbed-node-3] 2025-06-02 20:06:51.452579 | orchestrator | changed: [testbed-node-4] 2025-06-02 20:06:51.452584 | orchestrator | changed: [testbed-node-5] 2025-06-02 20:06:51.452589 | orchestrator | 2025-06-02 20:06:51.452595 | orchestrator | TASK [ceph-mds : Systemd start mds container] 
********************************** 2025-06-02 20:06:51.452600 | orchestrator | Monday 02 June 2025 20:05:29 +0000 (0:00:01.763) 0:09:39.511 *********** 2025-06-02 20:06:51.452605 | orchestrator | changed: [testbed-node-4] 2025-06-02 20:06:51.452610 | orchestrator | changed: [testbed-node-3] 2025-06-02 20:06:51.452616 | orchestrator | changed: [testbed-node-5] 2025-06-02 20:06:51.452621 | orchestrator | 2025-06-02 20:06:51.452626 | orchestrator | TASK [ceph-mds : Wait for mds socket to exist] ********************************* 2025-06-02 20:06:51.452632 | orchestrator | Monday 02 June 2025 20:05:31 +0000 (0:00:01.938) 0:09:41.450 *********** 2025-06-02 20:06:51.452637 | orchestrator | ok: [testbed-node-3] 2025-06-02 20:06:51.452645 | orchestrator | ok: [testbed-node-4] 2025-06-02 20:06:51.452651 | orchestrator | ok: [testbed-node-5] 2025-06-02 20:06:51.452656 | orchestrator | 2025-06-02 20:06:51.452661 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2025-06-02 20:06:51.452667 | orchestrator | Monday 02 June 2025 20:05:32 +0000 (0:00:01.453) 0:09:42.903 *********** 2025-06-02 20:06:51.452672 | orchestrator | changed: [testbed-node-3] 2025-06-02 20:06:51.452677 | orchestrator | changed: [testbed-node-4] 2025-06-02 20:06:51.452683 | orchestrator | changed: [testbed-node-5] 2025-06-02 20:06:51.452688 | orchestrator | 2025-06-02 20:06:51.452693 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] ********************************** 2025-06-02 20:06:51.452698 | orchestrator | Monday 02 June 2025 20:05:33 +0000 (0:00:00.701) 0:09:43.604 *********** 2025-06-02 20:06:51.452704 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-02 20:06:51.452709 | orchestrator | 2025-06-02 20:06:51.452715 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called before restart] ******** 2025-06-02 20:06:51.452720 | orchestrator | 
Monday 02 June 2025 20:05:34 +0000 (0:00:00.741) 0:09:44.345 *********** 2025-06-02 20:06:51.452725 | orchestrator | ok: [testbed-node-3] 2025-06-02 20:06:51.452730 | orchestrator | ok: [testbed-node-4] 2025-06-02 20:06:51.452736 | orchestrator | ok: [testbed-node-5] 2025-06-02 20:06:51.452741 | orchestrator | 2025-06-02 20:06:51.452746 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mds restart script] *********************** 2025-06-02 20:06:51.452752 | orchestrator | Monday 02 June 2025 20:05:34 +0000 (0:00:00.317) 0:09:44.663 *********** 2025-06-02 20:06:51.452760 | orchestrator | changed: [testbed-node-3] 2025-06-02 20:06:51.452765 | orchestrator | changed: [testbed-node-4] 2025-06-02 20:06:51.452771 | orchestrator | changed: [testbed-node-5] 2025-06-02 20:06:51.452776 | orchestrator | 2025-06-02 20:06:51.452781 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ******************** 2025-06-02 20:06:51.452787 | orchestrator | Monday 02 June 2025 20:05:35 +0000 (0:00:01.219) 0:09:45.883 *********** 2025-06-02 20:06:51.452792 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-06-02 20:06:51.452797 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-06-02 20:06:51.452802 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-06-02 20:06:51.452808 | orchestrator | skipping: [testbed-node-3] 2025-06-02 20:06:51.452813 | orchestrator | 2025-06-02 20:06:51.452823 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] ********* 2025-06-02 20:06:51.452829 | orchestrator | Monday 02 June 2025 20:05:36 +0000 (0:00:00.897) 0:09:46.780 *********** 2025-06-02 20:06:51.452834 | orchestrator | ok: [testbed-node-3] 2025-06-02 20:06:51.452839 | orchestrator | ok: [testbed-node-4] 2025-06-02 20:06:51.452845 | orchestrator | ok: [testbed-node-5] 2025-06-02 20:06:51.452850 | orchestrator | 2025-06-02 20:06:51.452855 | orchestrator | PLAY [Apply role 
ceph-rgw] ***************************************************** 2025-06-02 20:06:51.452861 | orchestrator | 2025-06-02 20:06:51.452866 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2025-06-02 20:06:51.452871 | orchestrator | Monday 02 June 2025 20:05:37 +0000 (0:00:00.860) 0:09:47.641 *********** 2025-06-02 20:06:51.452877 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-02 20:06:51.452906 | orchestrator | 2025-06-02 20:06:51.452911 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2025-06-02 20:06:51.452917 | orchestrator | Monday 02 June 2025 20:05:37 +0000 (0:00:00.500) 0:09:48.141 *********** 2025-06-02 20:06:51.452922 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-02 20:06:51.452927 | orchestrator | 2025-06-02 20:06:51.452933 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2025-06-02 20:06:51.452938 | orchestrator | Monday 02 June 2025 20:05:38 +0000 (0:00:00.738) 0:09:48.879 *********** 2025-06-02 20:06:51.452943 | orchestrator | skipping: [testbed-node-3] 2025-06-02 20:06:51.452949 | orchestrator | skipping: [testbed-node-4] 2025-06-02 20:06:51.452954 | orchestrator | skipping: [testbed-node-5] 2025-06-02 20:06:51.452959 | orchestrator | 2025-06-02 20:06:51.452965 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2025-06-02 20:06:51.452970 | orchestrator | Monday 02 June 2025 20:05:38 +0000 (0:00:00.337) 0:09:49.217 *********** 2025-06-02 20:06:51.452976 | orchestrator | ok: [testbed-node-3] 2025-06-02 20:06:51.452981 | orchestrator | ok: [testbed-node-4] 2025-06-02 20:06:51.452986 | orchestrator | ok: [testbed-node-5] 2025-06-02 20:06:51.452992 | orchestrator | 
2025-06-02 20:06:51.452997 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2025-06-02 20:06:51.453002 | orchestrator | Monday 02 June 2025 20:05:39 +0000 (0:00:00.713) 0:09:49.930 *********** 2025-06-02 20:06:51.453007 | orchestrator | ok: [testbed-node-3] 2025-06-02 20:06:51.453013 | orchestrator | ok: [testbed-node-4] 2025-06-02 20:06:51.453018 | orchestrator | ok: [testbed-node-5] 2025-06-02 20:06:51.453023 | orchestrator | 2025-06-02 20:06:51.453029 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2025-06-02 20:06:51.453034 | orchestrator | Monday 02 June 2025 20:05:40 +0000 (0:00:00.720) 0:09:50.651 *********** 2025-06-02 20:06:51.453039 | orchestrator | ok: [testbed-node-3] 2025-06-02 20:06:51.453045 | orchestrator | ok: [testbed-node-4] 2025-06-02 20:06:51.453050 | orchestrator | ok: [testbed-node-5] 2025-06-02 20:06:51.453055 | orchestrator | 2025-06-02 20:06:51.453064 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2025-06-02 20:06:51.453073 | orchestrator | Monday 02 June 2025 20:05:41 +0000 (0:00:01.028) 0:09:51.679 *********** 2025-06-02 20:06:51.453082 | orchestrator | skipping: [testbed-node-3] 2025-06-02 20:06:51.453091 | orchestrator | skipping: [testbed-node-4] 2025-06-02 20:06:51.453100 | orchestrator | skipping: [testbed-node-5] 2025-06-02 20:06:51.453110 | orchestrator | 2025-06-02 20:06:51.453117 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2025-06-02 20:06:51.453122 | orchestrator | Monday 02 June 2025 20:05:41 +0000 (0:00:00.332) 0:09:52.011 *********** 2025-06-02 20:06:51.453127 | orchestrator | skipping: [testbed-node-3] 2025-06-02 20:06:51.453133 | orchestrator | skipping: [testbed-node-4] 2025-06-02 20:06:51.453138 | orchestrator | skipping: [testbed-node-5] 2025-06-02 20:06:51.453143 | orchestrator | 2025-06-02 20:06:51.453153 | 
orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2025-06-02 20:06:51.453163 | orchestrator | Monday 02 June 2025 20:05:42 +0000 (0:00:00.292) 0:09:52.304 *********** 2025-06-02 20:06:51.453168 | orchestrator | skipping: [testbed-node-3] 2025-06-02 20:06:51.453174 | orchestrator | skipping: [testbed-node-4] 2025-06-02 20:06:51.453179 | orchestrator | skipping: [testbed-node-5] 2025-06-02 20:06:51.453184 | orchestrator | 2025-06-02 20:06:51.453190 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2025-06-02 20:06:51.453195 | orchestrator | Monday 02 June 2025 20:05:42 +0000 (0:00:00.300) 0:09:52.605 *********** 2025-06-02 20:06:51.453200 | orchestrator | ok: [testbed-node-3] 2025-06-02 20:06:51.453205 | orchestrator | ok: [testbed-node-4] 2025-06-02 20:06:51.453211 | orchestrator | ok: [testbed-node-5] 2025-06-02 20:06:51.453216 | orchestrator | 2025-06-02 20:06:51.453221 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2025-06-02 20:06:51.453227 | orchestrator | Monday 02 June 2025 20:05:43 +0000 (0:00:01.015) 0:09:53.621 *********** 2025-06-02 20:06:51.453232 | orchestrator | ok: [testbed-node-3] 2025-06-02 20:06:51.453237 | orchestrator | ok: [testbed-node-4] 2025-06-02 20:06:51.453243 | orchestrator | ok: [testbed-node-5] 2025-06-02 20:06:51.453248 | orchestrator | 2025-06-02 20:06:51.453253 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-06-02 20:06:51.453258 | orchestrator | Monday 02 June 2025 20:05:44 +0000 (0:00:00.749) 0:09:54.370 *********** 2025-06-02 20:06:51.453264 | orchestrator | skipping: [testbed-node-3] 2025-06-02 20:06:51.453269 | orchestrator | skipping: [testbed-node-4] 2025-06-02 20:06:51.453279 | orchestrator | skipping: [testbed-node-5] 2025-06-02 20:06:51.453284 | orchestrator | 2025-06-02 20:06:51.453289 | orchestrator | TASK [ceph-handler : 
Set_fact handler_mon_status] ****************************** 2025-06-02 20:06:51.453295 | orchestrator | Monday 02 June 2025 20:05:44 +0000 (0:00:00.307) 0:09:54.677 *********** 2025-06-02 20:06:51.453300 | orchestrator | skipping: [testbed-node-3] 2025-06-02 20:06:51.453305 | orchestrator | skipping: [testbed-node-4] 2025-06-02 20:06:51.453310 | orchestrator | skipping: [testbed-node-5] 2025-06-02 20:06:51.453316 | orchestrator | 2025-06-02 20:06:51.453321 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-06-02 20:06:51.453326 | orchestrator | Monday 02 June 2025 20:05:44 +0000 (0:00:00.308) 0:09:54.985 *********** 2025-06-02 20:06:51.453332 | orchestrator | ok: [testbed-node-3] 2025-06-02 20:06:51.453337 | orchestrator | ok: [testbed-node-4] 2025-06-02 20:06:51.453342 | orchestrator | ok: [testbed-node-5] 2025-06-02 20:06:51.453348 | orchestrator | 2025-06-02 20:06:51.453353 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2025-06-02 20:06:51.453358 | orchestrator | Monday 02 June 2025 20:05:45 +0000 (0:00:00.581) 0:09:55.566 *********** 2025-06-02 20:06:51.453364 | orchestrator | ok: [testbed-node-3] 2025-06-02 20:06:51.453369 | orchestrator | ok: [testbed-node-4] 2025-06-02 20:06:51.453374 | orchestrator | ok: [testbed-node-5] 2025-06-02 20:06:51.453380 | orchestrator | 2025-06-02 20:06:51.453385 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-06-02 20:06:51.453390 | orchestrator | Monday 02 June 2025 20:05:45 +0000 (0:00:00.358) 0:09:55.925 *********** 2025-06-02 20:06:51.453396 | orchestrator | ok: [testbed-node-3] 2025-06-02 20:06:51.453401 | orchestrator | ok: [testbed-node-4] 2025-06-02 20:06:51.453406 | orchestrator | ok: [testbed-node-5] 2025-06-02 20:06:51.453412 | orchestrator | 2025-06-02 20:06:51.453417 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] 
****************************** 2025-06-02 20:06:51.453422 | orchestrator | Monday 02 June 2025 20:05:45 +0000 (0:00:00.327) 0:09:56.252 *********** 2025-06-02 20:06:51.453428 | orchestrator | skipping: [testbed-node-3] 2025-06-02 20:06:51.453433 | orchestrator | skipping: [testbed-node-4] 2025-06-02 20:06:51.453438 | orchestrator | skipping: [testbed-node-5] 2025-06-02 20:06:51.453444 | orchestrator | 2025-06-02 20:06:51.453449 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-06-02 20:06:51.453454 | orchestrator | Monday 02 June 2025 20:05:46 +0000 (0:00:00.305) 0:09:56.557 *********** 2025-06-02 20:06:51.453465 | orchestrator | skipping: [testbed-node-3] 2025-06-02 20:06:51.453471 | orchestrator | skipping: [testbed-node-4] 2025-06-02 20:06:51.453476 | orchestrator | skipping: [testbed-node-5] 2025-06-02 20:06:51.453481 | orchestrator | 2025-06-02 20:06:51.453486 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2025-06-02 20:06:51.453492 | orchestrator | Monday 02 June 2025 20:05:46 +0000 (0:00:00.554) 0:09:57.112 *********** 2025-06-02 20:06:51.453497 | orchestrator | skipping: [testbed-node-3] 2025-06-02 20:06:51.453502 | orchestrator | skipping: [testbed-node-4] 2025-06-02 20:06:51.453508 | orchestrator | skipping: [testbed-node-5] 2025-06-02 20:06:51.453513 | orchestrator | 2025-06-02 20:06:51.453518 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2025-06-02 20:06:51.453524 | orchestrator | Monday 02 June 2025 20:05:47 +0000 (0:00:00.309) 0:09:57.422 *********** 2025-06-02 20:06:51.453529 | orchestrator | ok: [testbed-node-3] 2025-06-02 20:06:51.453534 | orchestrator | ok: [testbed-node-4] 2025-06-02 20:06:51.453540 | orchestrator | ok: [testbed-node-5] 2025-06-02 20:06:51.453545 | orchestrator | 2025-06-02 20:06:51.453550 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] 
************************* 2025-06-02 20:06:51.453555 | orchestrator | Monday 02 June 2025 20:05:47 +0000 (0:00:00.329) 0:09:57.752 *********** 2025-06-02 20:06:51.453561 | orchestrator | ok: [testbed-node-3] 2025-06-02 20:06:51.453566 | orchestrator | ok: [testbed-node-4] 2025-06-02 20:06:51.453571 | orchestrator | ok: [testbed-node-5] 2025-06-02 20:06:51.453577 | orchestrator | 2025-06-02 20:06:51.453582 | orchestrator | TASK [ceph-rgw : Include common.yml] ******************************************* 2025-06-02 20:06:51.453587 | orchestrator | Monday 02 June 2025 20:05:48 +0000 (0:00:00.817) 0:09:58.569 *********** 2025-06-02 20:06:51.453593 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-02 20:06:51.453598 | orchestrator | 2025-06-02 20:06:51.453603 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2025-06-02 20:06:51.453609 | orchestrator | Monday 02 June 2025 20:05:48 +0000 (0:00:00.497) 0:09:59.066 *********** 2025-06-02 20:06:51.453614 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-02 20:06:51.453619 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-06-02 20:06:51.453625 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2025-06-02 20:06:51.453630 | orchestrator | 2025-06-02 20:06:51.453638 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2025-06-02 20:06:51.453644 | orchestrator | Monday 02 June 2025 20:05:51 +0000 (0:00:02.245) 0:10:01.312 *********** 2025-06-02 20:06:51.453649 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-06-02 20:06:51.453655 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-06-02 20:06:51.453660 | orchestrator | changed: [testbed-node-3] 2025-06-02 20:06:51.453665 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-06-02 20:06:51.453670 
| orchestrator | skipping: [testbed-node-4] => (item=None)  2025-06-02 20:06:51.453676 | orchestrator | changed: [testbed-node-4] 2025-06-02 20:06:51.453681 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-06-02 20:06:51.453686 | orchestrator | skipping: [testbed-node-5] => (item=None)  2025-06-02 20:06:51.453692 | orchestrator | changed: [testbed-node-5] 2025-06-02 20:06:51.453697 | orchestrator | 2025-06-02 20:06:51.453702 | orchestrator | TASK [ceph-rgw : Copy SSL certificate & key data to certificate path] ********** 2025-06-02 20:06:51.453708 | orchestrator | Monday 02 June 2025 20:05:52 +0000 (0:00:01.527) 0:10:02.839 *********** 2025-06-02 20:06:51.453713 | orchestrator | skipping: [testbed-node-3] 2025-06-02 20:06:51.453718 | orchestrator | skipping: [testbed-node-4] 2025-06-02 20:06:51.453723 | orchestrator | skipping: [testbed-node-5] 2025-06-02 20:06:51.453729 | orchestrator | 2025-06-02 20:06:51.453734 | orchestrator | TASK [ceph-rgw : Include_tasks pre_requisite.yml] ****************************** 2025-06-02 20:06:51.453739 | orchestrator | Monday 02 June 2025 20:05:52 +0000 (0:00:00.310) 0:10:03.150 *********** 2025-06-02 20:06:51.453752 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/pre_requisite.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-02 20:06:51.453758 | orchestrator | 2025-06-02 20:06:51.453763 | orchestrator | TASK [ceph-rgw : Create rados gateway directories] ***************************** 2025-06-02 20:06:51.453768 | orchestrator | Monday 02 June 2025 20:05:53 +0000 (0:00:00.518) 0:10:03.669 *********** 2025-06-02 20:06:51.453774 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2025-06-02 20:06:51.453779 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 
'radosgw_frontend_port': 8081}) 2025-06-02 20:06:51.453785 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2025-06-02 20:06:51.453790 | orchestrator | 2025-06-02 20:06:51.453796 | orchestrator | TASK [ceph-rgw : Create rgw keyrings] ****************************************** 2025-06-02 20:06:51.453801 | orchestrator | Monday 02 June 2025 20:05:54 +0000 (0:00:01.358) 0:10:05.027 *********** 2025-06-02 20:06:51.453806 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-02 20:06:51.453812 | orchestrator | changed: [testbed-node-4 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2025-06-02 20:06:51.453817 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-02 20:06:51.453822 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2025-06-02 20:06:51.453828 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-02 20:06:51.453833 | orchestrator | changed: [testbed-node-3 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2025-06-02 20:06:51.453838 | orchestrator | 2025-06-02 20:06:51.453844 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2025-06-02 20:06:51.453849 | orchestrator | Monday 02 June 2025 20:05:59 +0000 (0:00:05.118) 0:10:10.146 *********** 2025-06-02 20:06:51.453854 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-02 20:06:51.453860 | orchestrator | ok: [testbed-node-4 -> {{ groups.get(mon_group_name)[0] }}] 2025-06-02 20:06:51.453865 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => 
(item=None) 2025-06-02 20:06:51.453870 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2025-06-02 20:06:51.453875 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-02 20:06:51.453894 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2025-06-02 20:06:51.453899 | orchestrator | 2025-06-02 20:06:51.453905 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2025-06-02 20:06:51.453910 | orchestrator | Monday 02 June 2025 20:06:02 +0000 (0:00:02.766) 0:10:12.913 *********** 2025-06-02 20:06:51.453915 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-06-02 20:06:51.453921 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-06-02 20:06:51.453926 | orchestrator | changed: [testbed-node-4] 2025-06-02 20:06:51.453931 | orchestrator | changed: [testbed-node-3] 2025-06-02 20:06:51.453937 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-06-02 20:06:51.453942 | orchestrator | changed: [testbed-node-5] 2025-06-02 20:06:51.453947 | orchestrator | 2025-06-02 20:06:51.453952 | orchestrator | TASK [ceph-rgw : Rgw pool creation tasks] ************************************** 2025-06-02 20:06:51.453958 | orchestrator | Monday 02 June 2025 20:06:03 +0000 (0:00:01.196) 0:10:14.109 *********** 2025-06-02 20:06:51.453963 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/rgw_create_pools.yml for testbed-node-3 2025-06-02 20:06:51.453968 | orchestrator | 2025-06-02 20:06:51.453977 | orchestrator | TASK [ceph-rgw : Create ec profile] ******************************************** 2025-06-02 20:06:51.453986 | orchestrator | Monday 02 June 2025 20:06:04 +0000 (0:00:00.210) 0:10:14.320 *********** 2025-06-02 20:06:51.453992 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-06-02 20:06:51.453997 | orchestrator | 
skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-06-02 20:06:51.454003 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-06-02 20:06:51.454008 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-06-02 20:06:51.454053 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-06-02 20:06:51.454061 | orchestrator | skipping: [testbed-node-3] 2025-06-02 20:06:51.454066 | orchestrator | 2025-06-02 20:06:51.454071 | orchestrator | TASK [ceph-rgw : Set crush rule] *********************************************** 2025-06-02 20:06:51.454077 | orchestrator | Monday 02 June 2025 20:06:04 +0000 (0:00:00.862) 0:10:15.182 *********** 2025-06-02 20:06:51.454085 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-06-02 20:06:51.454091 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-06-02 20:06:51.454096 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-06-02 20:06:51.454102 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-06-02 20:06:51.454110 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-06-02 20:06:51.454120 | orchestrator | skipping: [testbed-node-3] 2025-06-02 20:06:51.454126 | orchestrator | 2025-06-02 20:06:51.454131 | 
orchestrator | TASK [ceph-rgw : Create rgw pools] ********************************************* 2025-06-02 20:06:51.454136 | orchestrator | Monday 02 June 2025 20:06:06 +0000 (0:00:01.634) 0:10:16.816 *********** 2025-06-02 20:06:51.454142 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-06-02 20:06:51.454147 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-06-02 20:06:51.454153 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-06-02 20:06:51.454158 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-06-02 20:06:51.454163 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-06-02 20:06:51.454169 | orchestrator | 2025-06-02 20:06:51.454174 | orchestrator | TASK [ceph-rgw : Include_tasks openstack-keystone.yml] ************************* 2025-06-02 20:06:51.454179 | orchestrator | Monday 02 June 2025 20:06:38 +0000 (0:00:31.806) 0:10:48.623 *********** 2025-06-02 20:06:51.454185 | orchestrator | skipping: [testbed-node-3] 2025-06-02 20:06:51.454190 | orchestrator | skipping: [testbed-node-4] 2025-06-02 20:06:51.454195 | orchestrator | skipping: [testbed-node-5] 2025-06-02 20:06:51.454201 | orchestrator | 2025-06-02 20:06:51.454206 | orchestrator | TASK [ceph-rgw : Include_tasks start_radosgw.yml] ****************************** 2025-06-02 20:06:51.454216 | orchestrator | Monday 02 June 2025 20:06:38 +0000 (0:00:00.343) 0:10:48.966 
*********** 2025-06-02 20:06:51.454221 | orchestrator | skipping: [testbed-node-3] 2025-06-02 20:06:51.454227 | orchestrator | skipping: [testbed-node-4] 2025-06-02 20:06:51.454232 | orchestrator | skipping: [testbed-node-5] 2025-06-02 20:06:51.454237 | orchestrator | 2025-06-02 20:06:51.454242 | orchestrator | TASK [ceph-rgw : Include start_docker_rgw.yml] ********************************* 2025-06-02 20:06:51.454248 | orchestrator | Monday 02 June 2025 20:06:38 +0000 (0:00:00.311) 0:10:49.278 *********** 2025-06-02 20:06:51.454253 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/start_docker_rgw.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-02 20:06:51.454258 | orchestrator | 2025-06-02 20:06:51.454264 | orchestrator | TASK [ceph-rgw : Include_task systemd.yml] ************************************* 2025-06-02 20:06:51.454269 | orchestrator | Monday 02 June 2025 20:06:39 +0000 (0:00:00.756) 0:10:50.034 *********** 2025-06-02 20:06:51.454274 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-02 20:06:51.454280 | orchestrator | 2025-06-02 20:06:51.454285 | orchestrator | TASK [ceph-rgw : Generate systemd unit file] *********************************** 2025-06-02 20:06:51.454290 | orchestrator | Monday 02 June 2025 20:06:40 +0000 (0:00:00.531) 0:10:50.566 *********** 2025-06-02 20:06:51.454295 | orchestrator | changed: [testbed-node-3] 2025-06-02 20:06:51.454301 | orchestrator | changed: [testbed-node-4] 2025-06-02 20:06:51.454306 | orchestrator | changed: [testbed-node-5] 2025-06-02 20:06:51.454311 | orchestrator | 2025-06-02 20:06:51.454320 | orchestrator | TASK [ceph-rgw : Generate systemd ceph-radosgw target file] ******************** 2025-06-02 20:06:51.454325 | orchestrator | Monday 02 June 2025 20:06:41 +0000 (0:00:01.347) 0:10:51.913 *********** 2025-06-02 20:06:51.454331 | orchestrator | changed: [testbed-node-3] 2025-06-02 20:06:51.454336 | orchestrator | 
changed: [testbed-node-4] 2025-06-02 20:06:51.454341 | orchestrator | changed: [testbed-node-5] 2025-06-02 20:06:51.454347 | orchestrator | 2025-06-02 20:06:51.454352 | orchestrator | TASK [ceph-rgw : Enable ceph-radosgw.target] *********************************** 2025-06-02 20:06:51.454357 | orchestrator | Monday 02 June 2025 20:06:43 +0000 (0:00:01.456) 0:10:53.370 *********** 2025-06-02 20:06:51.454363 | orchestrator | changed: [testbed-node-4] 2025-06-02 20:06:51.454368 | orchestrator | changed: [testbed-node-3] 2025-06-02 20:06:51.454373 | orchestrator | changed: [testbed-node-5] 2025-06-02 20:06:51.454378 | orchestrator | 2025-06-02 20:06:51.454384 | orchestrator | TASK [ceph-rgw : Systemd start rgw container] ********************************** 2025-06-02 20:06:51.454389 | orchestrator | Monday 02 June 2025 20:06:44 +0000 (0:00:01.777) 0:10:55.147 *********** 2025-06-02 20:06:51.454394 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2025-06-02 20:06:51.454400 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2025-06-02 20:06:51.454408 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2025-06-02 20:06:51.454414 | orchestrator | 2025-06-02 20:06:51.454419 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2025-06-02 20:06:51.454424 | orchestrator | Monday 02 June 2025 20:06:47 +0000 (0:00:02.539) 0:10:57.687 *********** 2025-06-02 20:06:51.454429 | orchestrator | skipping: [testbed-node-3] 2025-06-02 20:06:51.454435 | orchestrator | skipping: [testbed-node-4] 2025-06-02 20:06:51.454440 | orchestrator | skipping: [testbed-node-5] 2025-06-02 20:06:51.454445 | orchestrator | 2025-06-02 20:06:51.454451 | orchestrator | RUNNING HANDLER 
[ceph-handler : Rgws handler] ********************************** 2025-06-02 20:06:51.454456 | orchestrator | Monday 02 June 2025 20:06:47 +0000 (0:00:00.373) 0:10:58.060 *********** 2025-06-02 20:06:51.454461 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-02 20:06:51.454469 | orchestrator | 2025-06-02 20:06:51.454485 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ******** 2025-06-02 20:06:51.454493 | orchestrator | Monday 02 June 2025 20:06:48 +0000 (0:00:00.503) 0:10:58.564 *********** 2025-06-02 20:06:51.454501 | orchestrator | ok: [testbed-node-3] 2025-06-02 20:06:51.454508 | orchestrator | ok: [testbed-node-4] 2025-06-02 20:06:51.454517 | orchestrator | ok: [testbed-node-5] 2025-06-02 20:06:51.454527 | orchestrator | 2025-06-02 20:06:51.454535 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] *********************** 2025-06-02 20:06:51.454543 | orchestrator | Monday 02 June 2025 20:06:48 +0000 (0:00:00.562) 0:10:59.127 *********** 2025-06-02 20:06:51.454551 | orchestrator | skipping: [testbed-node-3] 2025-06-02 20:06:51.454559 | orchestrator | skipping: [testbed-node-4] 2025-06-02 20:06:51.454568 | orchestrator | skipping: [testbed-node-5] 2025-06-02 20:06:51.454576 | orchestrator | 2025-06-02 20:06:51.454586 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ******************** 2025-06-02 20:06:51.454592 | orchestrator | Monday 02 June 2025 20:06:49 +0000 (0:00:00.346) 0:10:59.474 *********** 2025-06-02 20:06:51.454597 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-06-02 20:06:51.454603 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-06-02 20:06:51.454608 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-06-02 20:06:51.454613 | orchestrator | skipping: [testbed-node-3] 2025-06-02 20:06:51.454619 | 
orchestrator | 2025-06-02 20:06:51.454624 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] ********* 2025-06-02 20:06:51.454629 | orchestrator | Monday 02 June 2025 20:06:49 +0000 (0:00:00.610) 0:11:00.084 *********** 2025-06-02 20:06:51.454635 | orchestrator | ok: [testbed-node-3] 2025-06-02 20:06:51.454640 | orchestrator | ok: [testbed-node-4] 2025-06-02 20:06:51.454645 | orchestrator | ok: [testbed-node-5] 2025-06-02 20:06:51.454651 | orchestrator | 2025-06-02 20:06:51.454656 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-02 20:06:51.454661 | orchestrator | testbed-node-0 : ok=141  changed=36  unreachable=0 failed=0 skipped=135  rescued=0 ignored=0 2025-06-02 20:06:51.454667 | orchestrator | testbed-node-1 : ok=127  changed=31  unreachable=0 failed=0 skipped=120  rescued=0 ignored=0 2025-06-02 20:06:51.454673 | orchestrator | testbed-node-2 : ok=134  changed=33  unreachable=0 failed=0 skipped=119  rescued=0 ignored=0 2025-06-02 20:06:51.454678 | orchestrator | testbed-node-3 : ok=186  changed=44  unreachable=0 failed=0 skipped=152  rescued=0 ignored=0 2025-06-02 20:06:51.454684 | orchestrator | testbed-node-4 : ok=175  changed=40  unreachable=0 failed=0 skipped=123  rescued=0 ignored=0 2025-06-02 20:06:51.454689 | orchestrator | testbed-node-5 : ok=177  changed=41  unreachable=0 failed=0 skipped=121  rescued=0 ignored=0 2025-06-02 20:06:51.454694 | orchestrator | 2025-06-02 20:06:51.454700 | orchestrator | 2025-06-02 20:06:51.454705 | orchestrator | 2025-06-02 20:06:51.454710 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-02 20:06:51.454716 | orchestrator | Monday 02 June 2025 20:06:50 +0000 (0:00:00.234) 0:11:00.318 *********** 2025-06-02 20:06:51.454725 | orchestrator | =============================================================================== 2025-06-02 20:06:51.454731 | orchestrator | 
ceph-container-common : Pulling Ceph container image ------------------- 60.83s 2025-06-02 20:06:51.454736 | orchestrator | ceph-osd : Use ceph-volume to create osds ------------------------------ 40.69s 2025-06-02 20:06:51.454741 | orchestrator | ceph-rgw : Create rgw pools -------------------------------------------- 31.81s 2025-06-02 20:06:51.454750 | orchestrator | ceph-mgr : Wait for all mgr to be up ----------------------------------- 30.48s 2025-06-02 20:06:51.454758 | orchestrator | ceph-mon : Waiting for the monitor(s) to form the quorum... ------------ 22.07s 2025-06-02 20:06:51.454769 | orchestrator | ceph-mon : Set cluster configs ----------------------------------------- 14.74s 2025-06-02 20:06:51.454775 | orchestrator | ceph-osd : Wait for all osd to be up ----------------------------------- 13.03s 2025-06-02 20:06:51.454780 | orchestrator | ceph-mgr : Create ceph mgr keyring(s) on a mon node -------------------- 11.16s 2025-06-02 20:06:51.454785 | orchestrator | ceph-mon : Fetch ceph initial keys ------------------------------------- 11.14s 2025-06-02 20:06:51.454791 | orchestrator | ceph-mds : Create filesystem pools -------------------------------------- 8.96s 2025-06-02 20:06:51.454796 | orchestrator | ceph-mgr : Disable ceph mgr enabled modules ----------------------------- 6.56s 2025-06-02 20:06:51.454801 | orchestrator | ceph-config : Create ceph initial directories --------------------------- 6.18s 2025-06-02 20:06:51.454810 | orchestrator | ceph-rgw : Create rgw keyrings ------------------------------------------ 5.12s 2025-06-02 20:06:51.454816 | orchestrator | ceph-mgr : Add modules to ceph-mgr -------------------------------------- 4.86s 2025-06-02 20:06:51.454821 | orchestrator | ceph-mds : Create ceph filesystem --------------------------------------- 3.95s 2025-06-02 20:06:51.454826 | orchestrator | ceph-crash : Create client.crash keyring -------------------------------- 3.94s 2025-06-02 20:06:51.454831 | orchestrator | ceph-mon : Copy 
admin keyring over to mons ------------------------------ 3.89s 2025-06-02 20:06:51.454837 | orchestrator | ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created --- 3.37s 2025-06-02 20:06:51.454842 | orchestrator | ceph-osd : Systemd start osd -------------------------------------------- 3.30s 2025-06-02 20:06:51.454847 | orchestrator | ceph-crash : Start the ceph-crash service ------------------------------- 3.29s 2025-06-02 20:06:51.454853 | orchestrator | 2025-06-02 20:06:51 | INFO  | Task 4efb0510-077d-4334-9816-4b4ce82f3dee is in state SUCCESS 2025-06-02 20:06:51.454858 | orchestrator | 2025-06-02 20:06:51 | INFO  | Task 3281642f-6e92-41fb-b68d-cd78064f91af is in state STARTED 2025-06-02 20:06:51.454864 | orchestrator | 2025-06-02 20:06:51 | INFO  | Task 1e7de8ff-5f59-43e3-b68b-0dfec31d47b4 is in state STARTED 2025-06-02 20:06:51.454869 | orchestrator | 2025-06-02 20:06:51 | INFO  | Wait 1 second(s) until the next check 2025-06-02 20:06:54.479243 | orchestrator | 2025-06-02 20:06:54 | INFO  | Task 3281642f-6e92-41fb-b68d-cd78064f91af is in state STARTED 2025-06-02 20:06:54.480930 | orchestrator | 2025-06-02 20:06:54 | INFO  | Task 2155f8bb-bf01-4398-b5f8-fc2aa575344c is in state STARTED 2025-06-02 20:06:54.482580 | orchestrator | 2025-06-02 20:06:54 | INFO  | Task 1e7de8ff-5f59-43e3-b68b-0dfec31d47b4 is in state STARTED 2025-06-02 20:06:54.482650 | orchestrator | 2025-06-02 20:06:54 | INFO  | Wait 1 second(s) until the next check 2025-06-02 20:06:57.535566 | orchestrator | 2025-06-02 20:06:57 | INFO  | Task 3281642f-6e92-41fb-b68d-cd78064f91af is in state STARTED 2025-06-02 20:06:57.535694 | orchestrator | 2025-06-02 20:06:57 | INFO  | Task 2155f8bb-bf01-4398-b5f8-fc2aa575344c is in state STARTED 2025-06-02 20:06:57.537201 | orchestrator | 2025-06-02 20:06:57 | INFO  | Task 1e7de8ff-5f59-43e3-b68b-0dfec31d47b4 is in state STARTED 2025-06-02 20:06:57.537230 | orchestrator | 2025-06-02 20:06:57 | INFO  | Wait 1 second(s) until 
the next check 2025-06-02 20:07:00.584387 | orchestrator | 2025-06-02 20:07:00 | INFO  | Task 3281642f-6e92-41fb-b68d-cd78064f91af is in state STARTED 2025-06-02 20:07:00.585690 | orchestrator | 2025-06-02 20:07:00 | INFO  | Task 2155f8bb-bf01-4398-b5f8-fc2aa575344c is in state STARTED 2025-06-02 20:07:00.587464 | orchestrator | 2025-06-02 20:07:00 | INFO  | Task 1e7de8ff-5f59-43e3-b68b-0dfec31d47b4 is in state STARTED 2025-06-02 20:07:00.587518 | orchestrator | 2025-06-02 20:07:00 | INFO  | Wait 1 second(s) until the next check 2025-06-02 20:07:03.636313 | orchestrator | 2025-06-02 20:07:03 | INFO  | Task 3281642f-6e92-41fb-b68d-cd78064f91af is in state STARTED 2025-06-02 20:07:03.639191 | orchestrator | 2025-06-02 20:07:03 | INFO  | Task 2155f8bb-bf01-4398-b5f8-fc2aa575344c is in state STARTED 2025-06-02 20:07:03.640218 | orchestrator | 2025-06-02 20:07:03 | INFO  | Task 1e7de8ff-5f59-43e3-b68b-0dfec31d47b4 is in state STARTED 2025-06-02 20:07:03.640710 | orchestrator | 2025-06-02 20:07:03 | INFO  | Wait 1 second(s) until the next check 2025-06-02 20:07:06.681761 | orchestrator | 2025-06-02 20:07:06 | INFO  | Task 3281642f-6e92-41fb-b68d-cd78064f91af is in state STARTED 2025-06-02 20:07:06.683121 | orchestrator | 2025-06-02 20:07:06 | INFO  | Task 2155f8bb-bf01-4398-b5f8-fc2aa575344c is in state STARTED 2025-06-02 20:07:06.685981 | orchestrator | 2025-06-02 20:07:06 | INFO  | Task 1e7de8ff-5f59-43e3-b68b-0dfec31d47b4 is in state STARTED 2025-06-02 20:07:06.686082 | orchestrator | 2025-06-02 20:07:06 | INFO  | Wait 1 second(s) until the next check 2025-06-02 20:07:09.725126 | orchestrator | 2025-06-02 20:07:09 | INFO  | Task 3281642f-6e92-41fb-b68d-cd78064f91af is in state STARTED 2025-06-02 20:07:09.727164 | orchestrator | 2025-06-02 20:07:09 | INFO  | Task 2155f8bb-bf01-4398-b5f8-fc2aa575344c is in state STARTED 2025-06-02 20:07:09.727375 | orchestrator | 2025-06-02 20:07:09 | INFO  | Task 1e7de8ff-5f59-43e3-b68b-0dfec31d47b4 is in state STARTED 2025-06-02 
20:07:09.727390 | orchestrator | 2025-06-02 20:07:09 | INFO  | Wait 1 second(s) until the next check 2025-06-02 20:07:12.778103 | orchestrator | 2025-06-02 20:07:12 | INFO  | Task 3281642f-6e92-41fb-b68d-cd78064f91af is in state STARTED 2025-06-02 20:07:12.778222 | orchestrator | 2025-06-02 20:07:12 | INFO  | Task 2155f8bb-bf01-4398-b5f8-fc2aa575344c is in state STARTED 2025-06-02 20:07:12.778624 | orchestrator | 2025-06-02 20:07:12 | INFO  | Task 1e7de8ff-5f59-43e3-b68b-0dfec31d47b4 is in state STARTED 2025-06-02 20:07:12.778654 | orchestrator | 2025-06-02 20:07:12 | INFO  | Wait 1 second(s) until the next check 2025-06-02 20:07:15.830324 | orchestrator | 2025-06-02 20:07:15 | INFO  | Task 3281642f-6e92-41fb-b68d-cd78064f91af is in state STARTED 2025-06-02 20:07:15.831229 | orchestrator | 2025-06-02 20:07:15 | INFO  | Task 2155f8bb-bf01-4398-b5f8-fc2aa575344c is in state STARTED 2025-06-02 20:07:15.835704 | orchestrator | 2025-06-02 20:07:15 | INFO  | Task 1e7de8ff-5f59-43e3-b68b-0dfec31d47b4 is in state STARTED 2025-06-02 20:07:15.835823 | orchestrator | 2025-06-02 20:07:15 | INFO  | Wait 1 second(s) until the next check 2025-06-02 20:07:18.878932 | orchestrator | 2025-06-02 20:07:18 | INFO  | Task 3281642f-6e92-41fb-b68d-cd78064f91af is in state STARTED 2025-06-02 20:07:18.879017 | orchestrator | 2025-06-02 20:07:18 | INFO  | Task 2155f8bb-bf01-4398-b5f8-fc2aa575344c is in state STARTED 2025-06-02 20:07:18.879025 | orchestrator | 2025-06-02 20:07:18 | INFO  | Task 1e7de8ff-5f59-43e3-b68b-0dfec31d47b4 is in state STARTED 2025-06-02 20:07:18.879031 | orchestrator | 2025-06-02 20:07:18 | INFO  | Wait 1 second(s) until the next check 2025-06-02 20:07:21.933754 | orchestrator | 2025-06-02 20:07:21 | INFO  | Task 3281642f-6e92-41fb-b68d-cd78064f91af is in state STARTED 2025-06-02 20:07:21.938102 | orchestrator | 2025-06-02 20:07:21 | INFO  | Task 2155f8bb-bf01-4398-b5f8-fc2aa575344c is in state STARTED 2025-06-02 20:07:21.940678 | orchestrator | 2025-06-02 20:07:21 | 
INFO  | Task 1e7de8ff-5f59-43e3-b68b-0dfec31d47b4 is in state STARTED 2025-06-02 20:07:21.940724 | orchestrator | 2025-06-02 20:07:21 | INFO  | Wait 1 second(s) until the next check 2025-06-02 20:07:24.991395 | orchestrator | 2025-06-02 20:07:24 | INFO  | Task 3281642f-6e92-41fb-b68d-cd78064f91af is in state STARTED 2025-06-02 20:07:24.993394 | orchestrator | 2025-06-02 20:07:24 | INFO  | Task 2155f8bb-bf01-4398-b5f8-fc2aa575344c is in state STARTED 2025-06-02 20:07:24.995151 | orchestrator | 2025-06-02 20:07:24 | INFO  | Task 1e7de8ff-5f59-43e3-b68b-0dfec31d47b4 is in state STARTED 2025-06-02 20:07:24.995172 | orchestrator | 2025-06-02 20:07:24 | INFO  | Wait 1 second(s) until the next check 2025-06-02 20:07:28.043148 | orchestrator | 2025-06-02 20:07:28 | INFO  | Task 3281642f-6e92-41fb-b68d-cd78064f91af is in state STARTED 2025-06-02 20:07:28.044644 | orchestrator | 2025-06-02 20:07:28 | INFO  | Task 2155f8bb-bf01-4398-b5f8-fc2aa575344c is in state STARTED 2025-06-02 20:07:28.047185 | orchestrator | 2025-06-02 20:07:28 | INFO  | Task 1e7de8ff-5f59-43e3-b68b-0dfec31d47b4 is in state STARTED 2025-06-02 20:07:28.047428 | orchestrator | 2025-06-02 20:07:28 | INFO  | Wait 1 second(s) until the next check 2025-06-02 20:07:31.105807 | orchestrator | 2025-06-02 20:07:31 | INFO  | Task 3281642f-6e92-41fb-b68d-cd78064f91af is in state STARTED 2025-06-02 20:07:31.107128 | orchestrator | 2025-06-02 20:07:31 | INFO  | Task 2155f8bb-bf01-4398-b5f8-fc2aa575344c is in state STARTED 2025-06-02 20:07:31.109337 | orchestrator | 2025-06-02 20:07:31 | INFO  | Task 1e7de8ff-5f59-43e3-b68b-0dfec31d47b4 is in state SUCCESS 2025-06-02 20:07:31.110781 | orchestrator | 2025-06-02 20:07:31.110810 | orchestrator | 2025-06-02 20:07:31.110830 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-06-02 20:07:31.110870 | orchestrator | 2025-06-02 20:07:31.110879 | orchestrator | TASK [Group hosts based on Kolla action] 
*************************************** 2025-06-02 20:07:31.110888 | orchestrator | Monday 02 June 2025 20:04:38 +0000 (0:00:00.260) 0:00:00.260 *********** 2025-06-02 20:07:31.110896 | orchestrator | ok: [testbed-node-0] 2025-06-02 20:07:31.110906 | orchestrator | ok: [testbed-node-1] 2025-06-02 20:07:31.110914 | orchestrator | ok: [testbed-node-2] 2025-06-02 20:07:31.110922 | orchestrator | 2025-06-02 20:07:31.110930 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-06-02 20:07:31.110938 | orchestrator | Monday 02 June 2025 20:04:38 +0000 (0:00:00.339) 0:00:00.600 *********** 2025-06-02 20:07:31.110946 | orchestrator | ok: [testbed-node-0] => (item=enable_opensearch_True) 2025-06-02 20:07:31.110955 | orchestrator | ok: [testbed-node-1] => (item=enable_opensearch_True) 2025-06-02 20:07:31.110963 | orchestrator | ok: [testbed-node-2] => (item=enable_opensearch_True) 2025-06-02 20:07:31.110970 | orchestrator | 2025-06-02 20:07:31.110978 | orchestrator | PLAY [Apply role opensearch] *************************************************** 2025-06-02 20:07:31.110986 | orchestrator | 2025-06-02 20:07:31.110994 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2025-06-02 20:07:31.111001 | orchestrator | Monday 02 June 2025 20:04:39 +0000 (0:00:00.611) 0:00:01.211 *********** 2025-06-02 20:07:31.111009 | orchestrator | included: /ansible/roles/opensearch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 20:07:31.111017 | orchestrator | 2025-06-02 20:07:31.111025 | orchestrator | TASK [opensearch : Setting sysctl values] ************************************** 2025-06-02 20:07:31.111090 | orchestrator | Monday 02 June 2025 20:04:40 +0000 (0:00:00.508) 0:00:01.720 *********** 2025-06-02 20:07:31.111113 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-06-02 20:07:31.111121 | orchestrator | changed: 
[testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-06-02 20:07:31.111129 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-06-02 20:07:31.111137 | orchestrator | 2025-06-02 20:07:31.111145 | orchestrator | TASK [opensearch : Ensuring config directories exist] ************************** 2025-06-02 20:07:31.111153 | orchestrator | Monday 02 June 2025 20:04:40 +0000 (0:00:00.639) 0:00:02.360 *********** 2025-06-02 20:07:31.111165 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250530', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-06-02 20:07:31.111194 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250530', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-06-02 20:07:31.111307 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250530', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-06-02 20:07:31.111323 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250530', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 
'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-06-02 20:07:31.111339 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250530', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-06-02 20:07:31.111356 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250530', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-06-02 20:07:31.111365 | orchestrator | 2025-06-02 20:07:31.111373 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2025-06-02 20:07:31.111381 | orchestrator | Monday 02 June 2025 20:04:42 +0000 (0:00:01.694) 0:00:04.054 *********** 2025-06-02 20:07:31.111393 | orchestrator | included: /ansible/roles/opensearch/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 20:07:31.111400 | orchestrator | 2025-06-02 20:07:31.111408 | orchestrator | TASK [service-cert-copy : opensearch | Copying over extra CA certificates] ***** 2025-06-02 20:07:31.111416 | orchestrator | Monday 02 June 2025 20:04:42 +0000 (0:00:00.526) 0:00:04.580 *********** 2025-06-02 20:07:31.111431 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250530', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': 
False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-06-02 20:07:31.111444 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250530', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-06-02 20:07:31.111453 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250530', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-06-02 20:07:31.111467 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 
'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250530', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-06-02 20:07:31.111484 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250530', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 
'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-06-02 20:07:31.111497 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250530', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-06-02 20:07:31.111510 | orchestrator | 2025-06-02 20:07:31.111519 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS certificate] *** 2025-06-02 20:07:31.111527 | orchestrator | Monday 02 June 2025 20:04:46 +0000 (0:00:03.074) 0:00:07.654 *********** 2025-06-02 20:07:31.111535 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250530', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-06-02 20:07:31.111544 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250530', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-06-02 20:07:31.111553 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:07:31.111567 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250530', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-06-02 20:07:31.111580 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250530', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-06-02 20:07:31.111593 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:07:31.111602 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250530', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': 
['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-06-02 20:07:31.111611 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250530', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-06-02 20:07:31.111619 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:07:31.111627 | orchestrator | 2025-06-02 20:07:31.111635 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS key] *** 2025-06-02 20:07:31.111643 | orchestrator | Monday 02 June 2025 20:04:47 +0000 (0:00:01.114) 0:00:08.769 *********** 2025-06-02 
20:07:31.111656 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250530', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-06-02 20:07:31.111669 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250530', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-06-02 
20:07:31.111682 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:07:31.111690 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250530', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-06-02 20:07:31.111699 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250530', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 
'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-06-02 20:07:31.111707 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:07:31.111720 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250530', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-06-02 20:07:31.111729 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250530', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-06-02 20:07:31.111742 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:07:31.111750 | orchestrator | 2025-06-02 20:07:31.111758 | orchestrator | TASK [opensearch : Copying over config.json files for services] **************** 2025-06-02 20:07:31.111770 | orchestrator | Monday 02 June 2025 20:04:47 +0000 (0:00:00.578) 0:00:09.347 *********** 2025-06-02 20:07:31.111778 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250530', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-06-02 20:07:31.111787 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250530', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-06-02 20:07:31.111795 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250530', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-06-02 20:07:31.111809 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250530', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 
'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-06-02 20:07:31.111856 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250530', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-06-02 20:07:31.111868 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250530', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-06-02 20:07:31.111876 | orchestrator | 2025-06-02 20:07:31.111885 | orchestrator | TASK [opensearch : Copying over opensearch service config file] **************** 2025-06-02 20:07:31.111892 | orchestrator | Monday 02 June 2025 20:04:50 +0000 (0:00:02.350) 0:00:11.697 *********** 2025-06-02 20:07:31.111901 | orchestrator | changed: [testbed-node-0] 2025-06-02 20:07:31.111909 | orchestrator | changed: [testbed-node-1] 2025-06-02 20:07:31.111916 | orchestrator | changed: [testbed-node-2] 2025-06-02 20:07:31.111924 | orchestrator | 2025-06-02 20:07:31.111932 | orchestrator | TASK [opensearch : Copying over opensearch-dashboards config file] ************* 2025-06-02 20:07:31.111940 | orchestrator | Monday 02 June 2025 20:04:52 +0000 (0:00:02.522) 0:00:14.220 *********** 2025-06-02 20:07:31.111949 | orchestrator | changed: [testbed-node-0] 2025-06-02 20:07:31.111959 | orchestrator | changed: [testbed-node-1] 2025-06-02 20:07:31.111968 | orchestrator | changed: [testbed-node-2] 2025-06-02 20:07:31.111977 | orchestrator | 2025-06-02 20:07:31.111987 | orchestrator | TASK [opensearch : Check opensearch containers] ******************************** 2025-06-02 20:07:31.111997 | orchestrator | Monday 02 June 2025 20:04:54 +0000 (0:00:01.575) 0:00:15.795 *********** 2025-06-02 20:07:31.112013 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250530', 'environment': 
{'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-06-02 20:07:31.112047 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250530', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-06-02 20:07:31.112056 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250530', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': 
['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-06-02 20:07:31.112064 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250530', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-06-02 20:07:31.112079 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250530', 
'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-06-02 20:07:31.112098 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250530', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-06-02 20:07:31.112107 | orchestrator | 2025-06-02 20:07:31.112115 | orchestrator | TASK [opensearch : include_tasks] 
********************************************** 2025-06-02 20:07:31.112123 | orchestrator | Monday 02 June 2025 20:04:56 +0000 (0:00:01.942) 0:00:17.737 *********** 2025-06-02 20:07:31.112130 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:07:31.112138 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:07:31.112146 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:07:31.112153 | orchestrator | 2025-06-02 20:07:31.112161 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2025-06-02 20:07:31.112169 | orchestrator | Monday 02 June 2025 20:04:56 +0000 (0:00:00.259) 0:00:17.996 *********** 2025-06-02 20:07:31.112177 | orchestrator | 2025-06-02 20:07:31.112185 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2025-06-02 20:07:31.112192 | orchestrator | Monday 02 June 2025 20:04:56 +0000 (0:00:00.059) 0:00:18.056 *********** 2025-06-02 20:07:31.112200 | orchestrator | 2025-06-02 20:07:31.112208 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2025-06-02 20:07:31.112216 | orchestrator | Monday 02 June 2025 20:04:56 +0000 (0:00:00.059) 0:00:18.115 *********** 2025-06-02 20:07:31.112224 | orchestrator | 2025-06-02 20:07:31.112231 | orchestrator | RUNNING HANDLER [opensearch : Disable shard allocation] ************************ 2025-06-02 20:07:31.112239 | orchestrator | Monday 02 June 2025 20:04:56 +0000 (0:00:00.179) 0:00:18.295 *********** 2025-06-02 20:07:31.112247 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:07:31.112255 | orchestrator | 2025-06-02 20:07:31.112262 | orchestrator | RUNNING HANDLER [opensearch : Perform a flush] ********************************* 2025-06-02 20:07:31.112270 | orchestrator | Monday 02 June 2025 20:04:56 +0000 (0:00:00.163) 0:00:18.459 *********** 2025-06-02 20:07:31.112278 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:07:31.112286 | orchestrator | 
2025-06-02 20:07:31.112293 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch container] ******************** 2025-06-02 20:07:31.112301 | orchestrator | Monday 02 June 2025 20:04:56 +0000 (0:00:00.157) 0:00:18.617 *********** 2025-06-02 20:07:31.112309 | orchestrator | changed: [testbed-node-0] 2025-06-02 20:07:31.112317 | orchestrator | changed: [testbed-node-1] 2025-06-02 20:07:31.112324 | orchestrator | changed: [testbed-node-2] 2025-06-02 20:07:31.112332 | orchestrator | 2025-06-02 20:07:31.112340 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch-dashboards container] ********* 2025-06-02 20:07:31.112354 | orchestrator | Monday 02 June 2025 20:05:59 +0000 (0:01:02.088) 0:01:20.705 *********** 2025-06-02 20:07:31.112362 | orchestrator | changed: [testbed-node-0] 2025-06-02 20:07:31.112369 | orchestrator | changed: [testbed-node-2] 2025-06-02 20:07:31.112377 | orchestrator | changed: [testbed-node-1] 2025-06-02 20:07:31.112385 | orchestrator | 2025-06-02 20:07:31.112393 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2025-06-02 20:07:31.112400 | orchestrator | Monday 02 June 2025 20:07:19 +0000 (0:01:20.087) 0:02:40.793 *********** 2025-06-02 20:07:31.112408 | orchestrator | included: /ansible/roles/opensearch/tasks/post-config.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 20:07:31.112416 | orchestrator | 2025-06-02 20:07:31.112424 | orchestrator | TASK [opensearch : Wait for OpenSearch to become ready] ************************ 2025-06-02 20:07:31.112432 | orchestrator | Monday 02 June 2025 20:07:19 +0000 (0:00:00.724) 0:02:41.517 *********** 2025-06-02 20:07:31.112440 | orchestrator | ok: [testbed-node-0] 2025-06-02 20:07:31.112447 | orchestrator | 2025-06-02 20:07:31.112455 | orchestrator | TASK [opensearch : Check if a log retention policy exists] ********************* 2025-06-02 20:07:31.112463 | orchestrator | Monday 02 June 2025 20:07:22 +0000 (0:00:02.355) 
0:02:43.873 *********** 2025-06-02 20:07:31.112471 | orchestrator | ok: [testbed-node-0] 2025-06-02 20:07:31.112478 | orchestrator | 2025-06-02 20:07:31.112486 | orchestrator | TASK [opensearch : Create new log retention policy] **************************** 2025-06-02 20:07:31.112494 | orchestrator | Monday 02 June 2025 20:07:24 +0000 (0:00:02.279) 0:02:46.152 *********** 2025-06-02 20:07:31.112502 | orchestrator | changed: [testbed-node-0] 2025-06-02 20:07:31.112509 | orchestrator | 2025-06-02 20:07:31.112517 | orchestrator | TASK [opensearch : Apply retention policy to existing indices] ***************** 2025-06-02 20:07:31.112529 | orchestrator | Monday 02 June 2025 20:07:27 +0000 (0:00:02.883) 0:02:49.036 *********** 2025-06-02 20:07:31.112537 | orchestrator | changed: [testbed-node-0] 2025-06-02 20:07:31.112545 | orchestrator | 2025-06-02 20:07:31.112553 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-02 20:07:31.112562 | orchestrator | testbed-node-0 : ok=18  changed=11  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-06-02 20:07:31.112572 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-06-02 20:07:31.112580 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-06-02 20:07:31.112588 | orchestrator | 2025-06-02 20:07:31.112596 | orchestrator | 2025-06-02 20:07:31.112604 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-02 20:07:31.112611 | orchestrator | Monday 02 June 2025 20:07:29 +0000 (0:00:02.332) 0:02:51.369 *********** 2025-06-02 20:07:31.112619 | orchestrator | =============================================================================== 2025-06-02 20:07:31.112627 | orchestrator | opensearch : Restart opensearch-dashboards container ------------------- 80.09s 2025-06-02 20:07:31.112635 | orchestrator | 
opensearch : Restart opensearch container ------------------------------ 62.09s 2025-06-02 20:07:31.112643 | orchestrator | service-cert-copy : opensearch | Copying over extra CA certificates ----- 3.07s 2025-06-02 20:07:31.112650 | orchestrator | opensearch : Create new log retention policy ---------------------------- 2.88s 2025-06-02 20:07:31.112658 | orchestrator | opensearch : Copying over opensearch service config file ---------------- 2.52s 2025-06-02 20:07:31.112670 | orchestrator | opensearch : Wait for OpenSearch to become ready ------------------------ 2.36s 2025-06-02 20:07:31.112678 | orchestrator | opensearch : Copying over config.json files for services ---------------- 2.35s 2025-06-02 20:07:31.112685 | orchestrator | opensearch : Apply retention policy to existing indices ----------------- 2.33s 2025-06-02 20:07:31.112693 | orchestrator | opensearch : Check if a log retention policy exists --------------------- 2.28s 2025-06-02 20:07:31.112701 | orchestrator | opensearch : Check opensearch containers -------------------------------- 1.94s 2025-06-02 20:07:31.112714 | orchestrator | opensearch : Ensuring config directories exist -------------------------- 1.69s 2025-06-02 20:07:31.112721 | orchestrator | opensearch : Copying over opensearch-dashboards config file ------------- 1.58s 2025-06-02 20:07:31.112729 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS certificate --- 1.11s 2025-06-02 20:07:31.112737 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.72s 2025-06-02 20:07:31.112745 | orchestrator | opensearch : Setting sysctl values -------------------------------------- 0.64s 2025-06-02 20:07:31.112752 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.61s 2025-06-02 20:07:31.112760 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS key --- 0.58s 2025-06-02 20:07:31.112768 | orchestrator | 
opensearch : include_tasks ---------------------------------------------- 0.53s 2025-06-02 20:07:31.112776 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.51s 2025-06-02 20:07:31.112783 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.34s 2025-06-02 20:07:34.157507 | orchestrator | 2025-06-02 20:07:34 | INFO  | Task 3281642f-6e92-41fb-b68d-cd78064f91af is in state STARTED 2025-06-02 20:07:34.158637 | orchestrator | 2025-06-02 20:07:34 | INFO  | Task 2155f8bb-bf01-4398-b5f8-fc2aa575344c is in state STARTED 2025-06-02 20:07:34.158694 | orchestrator | 2025-06-02 20:07:34 | INFO  | Wait 1 second(s) until the next check 2025-06-02 20:07:37.206810 | orchestrator | 2025-06-02 20:07:37 | INFO  | Task 3281642f-6e92-41fb-b68d-cd78064f91af is in state STARTED 2025-06-02 20:07:37.208893 | orchestrator | 2025-06-02 20:07:37 | INFO  | Task 2155f8bb-bf01-4398-b5f8-fc2aa575344c is in state STARTED 2025-06-02 20:07:37.208945 | orchestrator | 2025-06-02 20:07:37 | INFO  | Wait 1 second(s) until the next check 2025-06-02 20:07:40.253574 | orchestrator | 2025-06-02 20:07:40 | INFO  | Task 3281642f-6e92-41fb-b68d-cd78064f91af is in state STARTED 2025-06-02 20:07:40.257963 | orchestrator | 2025-06-02 20:07:40 | INFO  | Task 2155f8bb-bf01-4398-b5f8-fc2aa575344c is in state STARTED 2025-06-02 20:07:40.258126 | orchestrator | 2025-06-02 20:07:40 | INFO  | Wait 1 second(s) until the next check 2025-06-02 20:07:43.299943 | orchestrator | 2025-06-02 20:07:43 | INFO  | Task 3281642f-6e92-41fb-b68d-cd78064f91af is in state STARTED 2025-06-02 20:07:43.301257 | orchestrator | 2025-06-02 20:07:43 | INFO  | Task 2155f8bb-bf01-4398-b5f8-fc2aa575344c is in state STARTED 2025-06-02 20:07:43.301374 | orchestrator | 2025-06-02 20:07:43 | INFO  | Wait 1 second(s) until the next check 2025-06-02 20:07:46.343235 | orchestrator | 2025-06-02 20:07:46 | INFO  | Task 3281642f-6e92-41fb-b68d-cd78064f91af is in state STARTED 
2025-06-02 20:07:46.345013 | orchestrator | 2025-06-02 20:07:46 | INFO  | Task 2155f8bb-bf01-4398-b5f8-fc2aa575344c is in state STARTED 2025-06-02 20:07:46.345113 | orchestrator | 2025-06-02 20:07:46 | INFO  | Wait 1 second(s) until the next check 2025-06-02 20:07:49.403402 | orchestrator | 2025-06-02 20:07:49 | INFO  | Task b9bd58c0-51ef-4a4d-8566-c45d47a9a927 is in state STARTED 2025-06-02 20:07:49.404888 | orchestrator | 2025-06-02 20:07:49 | INFO  | Task 3dc7e772-9928-4352-8c43-c2de036f278e is in state STARTED 2025-06-02 20:07:49.409275 | orchestrator | 2025-06-02 20:07:49 | INFO  | Task 3281642f-6e92-41fb-b68d-cd78064f91af is in state SUCCESS 2025-06-02 20:07:49.411082 | orchestrator | 2025-06-02 20:07:49.411138 | orchestrator | 2025-06-02 20:07:49.411151 | orchestrator | PLAY [Set kolla_action_mariadb] ************************************************ 2025-06-02 20:07:49.411163 | orchestrator | 2025-06-02 20:07:49.411174 | orchestrator | TASK [Inform the user about the following task] ******************************** 2025-06-02 20:07:49.411185 | orchestrator | Monday 02 June 2025 20:04:38 +0000 (0:00:00.098) 0:00:00.098 *********** 2025-06-02 20:07:49.411222 | orchestrator | ok: [localhost] => { 2025-06-02 20:07:49.411235 | orchestrator |  "msg": "The task 'Check MariaDB service' fails if the MariaDB service has not yet been deployed. This is fine." 2025-06-02 20:07:49.411247 | orchestrator | } 2025-06-02 20:07:49.411258 | orchestrator | 2025-06-02 20:07:49.411269 | orchestrator | TASK [Check MariaDB service] *************************************************** 2025-06-02 20:07:49.411279 | orchestrator | Monday 02 June 2025 20:04:38 +0000 (0:00:00.050) 0:00:00.149 *********** 2025-06-02 20:07:49.411290 | orchestrator | fatal: [localhost]: FAILED! 
=> {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.9:3306"} 2025-06-02 20:07:49.411303 | orchestrator | ...ignoring 2025-06-02 20:07:49.411316 | orchestrator | 2025-06-02 20:07:49.411374 | orchestrator | TASK [Set kolla_action_mariadb = upgrade if MariaDB is already running] ******** 2025-06-02 20:07:49.411417 | orchestrator | Monday 02 June 2025 20:04:41 +0000 (0:00:03.000) 0:00:03.149 *********** 2025-06-02 20:07:49.411430 | orchestrator | skipping: [localhost] 2025-06-02 20:07:49.411441 | orchestrator | 2025-06-02 20:07:49.411452 | orchestrator | TASK [Set kolla_action_mariadb = kolla_action_ng] ****************************** 2025-06-02 20:07:49.411463 | orchestrator | Monday 02 June 2025 20:04:41 +0000 (0:00:00.070) 0:00:03.219 *********** 2025-06-02 20:07:49.411473 | orchestrator | ok: [localhost] 2025-06-02 20:07:49.411484 | orchestrator | 2025-06-02 20:07:49.411494 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-06-02 20:07:49.411505 | orchestrator | 2025-06-02 20:07:49.411515 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-06-02 20:07:49.411526 | orchestrator | Monday 02 June 2025 20:04:41 +0000 (0:00:00.155) 0:00:03.375 *********** 2025-06-02 20:07:49.411536 | orchestrator | ok: [testbed-node-0] 2025-06-02 20:07:49.411547 | orchestrator | ok: [testbed-node-1] 2025-06-02 20:07:49.411557 | orchestrator | ok: [testbed-node-2] 2025-06-02 20:07:49.411568 | orchestrator | 2025-06-02 20:07:49.411579 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-06-02 20:07:49.411589 | orchestrator | Monday 02 June 2025 20:04:41 +0000 (0:00:00.300) 0:00:03.675 *********** 2025-06-02 20:07:49.411599 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True) 2025-06-02 20:07:49.411610 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True) 
2025-06-02 20:07:49.411621 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True) 2025-06-02 20:07:49.411631 | orchestrator | 2025-06-02 20:07:49.411642 | orchestrator | PLAY [Apply role mariadb] ****************************************************** 2025-06-02 20:07:49.411652 | orchestrator | 2025-06-02 20:07:49.411662 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] *************************** 2025-06-02 20:07:49.411673 | orchestrator | Monday 02 June 2025 20:04:42 +0000 (0:00:00.665) 0:00:04.340 *********** 2025-06-02 20:07:49.411683 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-06-02 20:07:49.411694 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2025-06-02 20:07:49.411705 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2025-06-02 20:07:49.411715 | orchestrator | 2025-06-02 20:07:49.411726 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-06-02 20:07:49.411736 | orchestrator | Monday 02 June 2025 20:04:43 +0000 (0:00:00.346) 0:00:04.687 *********** 2025-06-02 20:07:49.411747 | orchestrator | included: /ansible/roles/mariadb/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 20:07:49.411759 | orchestrator | 2025-06-02 20:07:49.411770 | orchestrator | TASK [mariadb : Ensuring config directories exist] ***************************** 2025-06-02 20:07:49.411780 | orchestrator | Monday 02 June 2025 20:04:43 +0000 (0:00:00.578) 0:00:05.266 *********** 2025-06-02 20:07:49.411844 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-06-02 20:07:49.411879 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-06-02 20:07:49.411893 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 
'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-06-02 20:07:49.411912 | orchestrator | 2025-06-02 20:07:49.411931 | orchestrator | TASK [mariadb : Ensuring database backup config directory exists] ************** 2025-06-02 20:07:49.411942 | orchestrator | Monday 02 June 2025 20:04:46 +0000 (0:00:03.108) 0:00:08.374 *********** 2025-06-02 20:07:49.411953 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:07:49.411964 | orchestrator | changed: [testbed-node-0] 2025-06-02 20:07:49.411974 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:07:49.411985 | orchestrator | 2025-06-02 20:07:49.411995 | orchestrator | TASK [mariadb : Copying over my.cnf for mariabackup] *************************** 2025-06-02 20:07:49.412006 | orchestrator | Monday 02 June 2025 20:04:47 +0000 (0:00:00.575) 0:00:08.950 *********** 2025-06-02 20:07:49.412016 | orchestrator | 
skipping: [testbed-node-1] 2025-06-02 20:07:49.412027 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:07:49.412037 | orchestrator | changed: [testbed-node-0] 2025-06-02 20:07:49.412048 | orchestrator | 2025-06-02 20:07:49.412058 | orchestrator | TASK [mariadb : Copying over config.json files for services] ******************* 2025-06-02 20:07:49.412069 | orchestrator | Monday 02 June 2025 20:04:48 +0000 (0:00:01.295) 0:00:10.246 *********** 2025-06-02 20:07:49.412087 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout 
server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-06-02 20:07:49.412118 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' 
server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-06-02 20:07:49.412136 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 
testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-06-02 20:07:49.412148 | orchestrator | 2025-06-02 20:07:49.412159 | orchestrator | TASK [mariadb : Copying over config.json files for mariabackup] **************** 2025-06-02 20:07:49.412170 | orchestrator | Monday 02 June 2025 20:04:51 +0000 (0:00:03.277) 0:00:13.524 *********** 2025-06-02 20:07:49.412181 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:07:49.412198 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:07:49.412209 | orchestrator | changed: [testbed-node-0] 2025-06-02 20:07:49.412220 | orchestrator | 2025-06-02 20:07:49.412230 | orchestrator | TASK [mariadb : Copying over galera.cnf] *************************************** 2025-06-02 20:07:49.412241 | orchestrator | Monday 02 June 2025 20:04:52 +0000 (0:00:01.085) 0:00:14.609 *********** 2025-06-02 20:07:49.412252 | orchestrator | changed: [testbed-node-0] 2025-06-02 20:07:49.412262 | orchestrator | changed: [testbed-node-2] 2025-06-02 20:07:49.412273 | orchestrator | changed: [testbed-node-1] 2025-06-02 20:07:49.412283 | orchestrator | 2025-06-02 20:07:49.412294 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-06-02 20:07:49.412304 | orchestrator | Monday 02 June 2025 20:04:56 +0000 (0:00:03.876) 0:00:18.486 *********** 2025-06-02 20:07:49.412315 | orchestrator | included: /ansible/roles/mariadb/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 20:07:49.412326 | orchestrator | 2025-06-02 20:07:49.412336 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2025-06-02 20:07:49.412347 | orchestrator | Monday 02 June 2025 20:04:57 +0000 (0:00:00.448) 0:00:18.934 *********** 2025-06-02 20:07:49.412367 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-06-02 20:07:49.412385 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:07:49.412397 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-06-02 20:07:49.412415 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:07:49.412434 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-06-02 20:07:49.412447 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:07:49.412458 | orchestrator | 2025-06-02 20:07:49.412468 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] *** 2025-06-02 20:07:49.412479 | orchestrator | Monday 02 June 2025 20:05:00 
+0000 (0:00:03.322) 0:00:22.256 *********** 2025-06-02 20:07:49.412495 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-06-02 20:07:49.412513 | orchestrator | skipping: [testbed-node-1] 2025-06-02 
20:07:49.412531 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-06-02 20:07:49.412543 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:07:49.412560 | orchestrator | skipping: [testbed-node-2] 
=> (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-06-02 20:07:49.412584 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:07:49.412595 | orchestrator | 2025-06-02 20:07:49.412606 | orchestrator | TASK [service-cert-copy : mariadb | 
Copying over backend internal TLS key] ***** 2025-06-02 20:07:49.412616 | orchestrator | Monday 02 June 2025 20:05:03 +0000 (0:00:02.761) 0:00:25.018 *********** 2025-06-02 20:07:49.412634 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 
rise 2 fall 5 backup', '']}}}})  2025-06-02 20:07:49.412646 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:07:49.412663 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-06-02 20:07:49.412684 
| orchestrator | skipping: [testbed-node-2] 2025-06-02 20:07:49.412696 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-06-02 20:07:49.412707 | orchestrator | skipping: [testbed-node-0] 2025-06-02 
20:07:49.412718 | orchestrator | 2025-06-02 20:07:49.412729 | orchestrator | TASK [mariadb : Check mariadb containers] ************************************** 2025-06-02 20:07:49.412739 | orchestrator | Monday 02 June 2025 20:05:05 +0000 (0:00:02.617) 0:00:27.635 *********** 2025-06-02 20:07:49.412764 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 
3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-06-02 20:07:49.412786 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', 
'']}}}}) 2025-06-02 20:07:49.412851 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-06-02 20:07:49.412882 | orchestrator | 2025-06-02 20:07:49.412894 | orchestrator | TASK [mariadb : Create MariaDB 
volume] ***************************************** 2025-06-02 20:07:49.412905 | orchestrator | Monday 02 June 2025 20:05:09 +0000 (0:00:03.506) 0:00:31.142 *********** 2025-06-02 20:07:49.412915 | orchestrator | changed: [testbed-node-0] 2025-06-02 20:07:49.412926 | orchestrator | changed: [testbed-node-1] 2025-06-02 20:07:49.412937 | orchestrator | changed: [testbed-node-2] 2025-06-02 20:07:49.412947 | orchestrator | 2025-06-02 20:07:49.412958 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB volume availability] ************* 2025-06-02 20:07:49.412968 | orchestrator | Monday 02 June 2025 20:05:10 +0000 (0:00:01.099) 0:00:32.241 *********** 2025-06-02 20:07:49.412979 | orchestrator | ok: [testbed-node-0] 2025-06-02 20:07:49.412990 | orchestrator | ok: [testbed-node-1] 2025-06-02 20:07:49.413001 | orchestrator | ok: [testbed-node-2] 2025-06-02 20:07:49.413011 | orchestrator | 2025-06-02 20:07:49.413022 | orchestrator | TASK [mariadb : Establish whether the cluster has already existed] ************* 2025-06-02 20:07:49.413033 | orchestrator | Monday 02 June 2025 20:05:10 +0000 (0:00:00.421) 0:00:32.663 *********** 2025-06-02 20:07:49.413043 | orchestrator | ok: [testbed-node-0] 2025-06-02 20:07:49.413054 | orchestrator | ok: [testbed-node-1] 2025-06-02 20:07:49.413064 | orchestrator | ok: [testbed-node-2] 2025-06-02 20:07:49.413075 | orchestrator | 2025-06-02 20:07:49.413085 | orchestrator | TASK [mariadb : Check MariaDB service port liveness] *************************** 2025-06-02 20:07:49.413096 | orchestrator | Monday 02 June 2025 20:05:11 +0000 (0:00:00.426) 0:00:33.089 *********** 2025-06-02 20:07:49.413107 | orchestrator | fatal: [testbed-node-1]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.11:3306"} 2025-06-02 20:07:49.413118 | orchestrator | ...ignoring 2025-06-02 20:07:49.413129 | orchestrator | fatal: [testbed-node-0]: FAILED! 
=> {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.10:3306"} 2025-06-02 20:07:49.413140 | orchestrator | ...ignoring 2025-06-02 20:07:49.413150 | orchestrator | fatal: [testbed-node-2]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.12:3306"} 2025-06-02 20:07:49.413161 | orchestrator | ...ignoring 2025-06-02 20:07:49.413171 | orchestrator | 2025-06-02 20:07:49.413182 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service port liveness] *********** 2025-06-02 20:07:49.413193 | orchestrator | Monday 02 June 2025 20:05:22 +0000 (0:00:11.147) 0:00:44.236 *********** 2025-06-02 20:07:49.413203 | orchestrator | ok: [testbed-node-0] 2025-06-02 20:07:49.413214 | orchestrator | ok: [testbed-node-1] 2025-06-02 20:07:49.413224 | orchestrator | ok: [testbed-node-2] 2025-06-02 20:07:49.413235 | orchestrator | 2025-06-02 20:07:49.413245 | orchestrator | TASK [mariadb : Fail on existing but stopped cluster] ************************** 2025-06-02 20:07:49.413256 | orchestrator | Monday 02 June 2025 20:05:23 +0000 (0:00:00.607) 0:00:44.844 *********** 2025-06-02 20:07:49.413267 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:07:49.413277 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:07:49.413288 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:07:49.413298 | orchestrator | 2025-06-02 20:07:49.413309 | orchestrator | TASK [mariadb : Check MariaDB service WSREP sync status] *********************** 2025-06-02 20:07:49.413320 | orchestrator | Monday 02 June 2025 20:05:23 +0000 (0:00:00.402) 0:00:45.246 *********** 2025-06-02 20:07:49.413330 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:07:49.413341 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:07:49.413351 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:07:49.413362 | orchestrator | 2025-06-02 20:07:49.413372 | orchestrator | TASK 
[mariadb : Extract MariaDB service WSREP sync status] ********************* 2025-06-02 20:07:49.413389 | orchestrator | Monday 02 June 2025 20:05:23 +0000 (0:00:00.394) 0:00:45.641 *********** 2025-06-02 20:07:49.413400 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:07:49.413410 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:07:49.413421 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:07:49.413431 | orchestrator | 2025-06-02 20:07:49.413442 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service WSREP sync status] ******* 2025-06-02 20:07:49.413459 | orchestrator | Monday 02 June 2025 20:05:24 +0000 (0:00:00.394) 0:00:46.036 *********** 2025-06-02 20:07:49.413471 | orchestrator | ok: [testbed-node-0] 2025-06-02 20:07:49.413481 | orchestrator | ok: [testbed-node-1] 2025-06-02 20:07:49.413492 | orchestrator | ok: [testbed-node-2] 2025-06-02 20:07:49.413502 | orchestrator | 2025-06-02 20:07:49.413513 | orchestrator | TASK [mariadb : Fail when MariaDB services are not synced across the whole cluster] *** 2025-06-02 20:07:49.413523 | orchestrator | Monday 02 June 2025 20:05:24 +0000 (0:00:00.590) 0:00:46.627 *********** 2025-06-02 20:07:49.413534 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:07:49.413544 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:07:49.413555 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:07:49.413565 | orchestrator | 2025-06-02 20:07:49.413576 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-06-02 20:07:49.413586 | orchestrator | Monday 02 June 2025 20:05:25 +0000 (0:00:00.439) 0:00:47.067 *********** 2025-06-02 20:07:49.413597 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:07:49.413607 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:07:49.413618 | orchestrator | included: /ansible/roles/mariadb/tasks/bootstrap_cluster.yml for testbed-node-0 2025-06-02 20:07:49.413628 | orchestrator | 2025-06-02 
20:07:49.413639 | orchestrator | TASK [mariadb : Running MariaDB bootstrap container] *************************** 2025-06-02 20:07:49.413654 | orchestrator | Monday 02 June 2025 20:05:25 +0000 (0:00:00.376) 0:00:47.443 *********** 2025-06-02 20:07:49.413665 | orchestrator | changed: [testbed-node-0] 2025-06-02 20:07:49.413675 | orchestrator | 2025-06-02 20:07:49.413686 | orchestrator | TASK [mariadb : Store bootstrap host name into facts] ************************** 2025-06-02 20:07:49.413696 | orchestrator | Monday 02 June 2025 20:05:35 +0000 (0:00:10.069) 0:00:57.513 *********** 2025-06-02 20:07:49.413707 | orchestrator | ok: [testbed-node-0] 2025-06-02 20:07:49.413717 | orchestrator | 2025-06-02 20:07:49.413728 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-06-02 20:07:49.413738 | orchestrator | Monday 02 June 2025 20:05:35 +0000 (0:00:00.147) 0:00:57.660 *********** 2025-06-02 20:07:49.413749 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:07:49.413759 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:07:49.413770 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:07:49.413781 | orchestrator | 2025-06-02 20:07:49.413791 | orchestrator | RUNNING HANDLER [mariadb : Starting first MariaDB container] ******************* 2025-06-02 20:07:49.413802 | orchestrator | Monday 02 June 2025 20:05:37 +0000 (0:00:01.042) 0:00:58.703 *********** 2025-06-02 20:07:49.413901 | orchestrator | changed: [testbed-node-0] 2025-06-02 20:07:49.413915 | orchestrator | 2025-06-02 20:07:49.413925 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service port liveness] ******* 2025-06-02 20:07:49.413936 | orchestrator | Monday 02 June 2025 20:05:44 +0000 (0:00:07.805) 0:01:06.508 *********** 2025-06-02 20:07:49.413947 | orchestrator | ok: [testbed-node-0] 2025-06-02 20:07:49.413957 | orchestrator | 2025-06-02 20:07:49.413968 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB 
service to sync WSREP] ******* 2025-06-02 20:07:49.413979 | orchestrator | Monday 02 June 2025 20:05:46 +0000 (0:00:01.643) 0:01:08.152 *********** 2025-06-02 20:07:49.413989 | orchestrator | ok: [testbed-node-0] 2025-06-02 20:07:49.414000 | orchestrator | 2025-06-02 20:07:49.414010 | orchestrator | RUNNING HANDLER [mariadb : Ensure MariaDB is running normally on bootstrap host] *** 2025-06-02 20:07:49.414074 | orchestrator | Monday 02 June 2025 20:05:48 +0000 (0:00:02.466) 0:01:10.618 *********** 2025-06-02 20:07:49.414085 | orchestrator | changed: [testbed-node-0] 2025-06-02 20:07:49.414104 | orchestrator | 2025-06-02 20:07:49.414115 | orchestrator | RUNNING HANDLER [mariadb : Restart MariaDB on existing cluster members] ******** 2025-06-02 20:07:49.414126 | orchestrator | Monday 02 June 2025 20:05:49 +0000 (0:00:00.137) 0:01:10.755 *********** 2025-06-02 20:07:49.414136 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:07:49.414147 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:07:49.414157 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:07:49.414166 | orchestrator | 2025-06-02 20:07:49.414176 | orchestrator | RUNNING HANDLER [mariadb : Start MariaDB on new nodes] ************************* 2025-06-02 20:07:49.414185 | orchestrator | Monday 02 June 2025 20:05:49 +0000 (0:00:00.501) 0:01:11.257 *********** 2025-06-02 20:07:49.414194 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:07:49.414204 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart 2025-06-02 20:07:49.414213 | orchestrator | changed: [testbed-node-1] 2025-06-02 20:07:49.414222 | orchestrator | changed: [testbed-node-2] 2025-06-02 20:07:49.414232 | orchestrator | 2025-06-02 20:07:49.414241 | orchestrator | PLAY [Restart mariadb services] ************************************************ 2025-06-02 20:07:49.414250 | orchestrator | skipping: no hosts matched 2025-06-02 20:07:49.414260 | orchestrator | 2025-06-02 20:07:49.414269 
| orchestrator | PLAY [Start mariadb services] ************************************************** 2025-06-02 20:07:49.414279 | orchestrator | 2025-06-02 20:07:49.414288 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2025-06-02 20:07:49.414297 | orchestrator | Monday 02 June 2025 20:05:49 +0000 (0:00:00.321) 0:01:11.578 *********** 2025-06-02 20:07:49.414307 | orchestrator | changed: [testbed-node-1] 2025-06-02 20:07:49.414316 | orchestrator | 2025-06-02 20:07:49.414325 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2025-06-02 20:07:49.414335 | orchestrator | Monday 02 June 2025 20:06:08 +0000 (0:00:19.062) 0:01:30.641 *********** 2025-06-02 20:07:49.414344 | orchestrator | ok: [testbed-node-1] 2025-06-02 20:07:49.414354 | orchestrator | 2025-06-02 20:07:49.414363 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2025-06-02 20:07:49.414372 | orchestrator | Monday 02 June 2025 20:06:29 +0000 (0:00:20.694) 0:01:51.335 *********** 2025-06-02 20:07:49.414382 | orchestrator | ok: [testbed-node-1] 2025-06-02 20:07:49.414391 | orchestrator | 2025-06-02 20:07:49.414400 | orchestrator | PLAY [Start mariadb services] ************************************************** 2025-06-02 20:07:49.414410 | orchestrator | 2025-06-02 20:07:49.414419 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2025-06-02 20:07:49.414429 | orchestrator | Monday 02 June 2025 20:06:32 +0000 (0:00:02.400) 0:01:53.736 *********** 2025-06-02 20:07:49.414438 | orchestrator | changed: [testbed-node-2] 2025-06-02 20:07:49.414447 | orchestrator | 2025-06-02 20:07:49.414457 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2025-06-02 20:07:49.414474 | orchestrator | Monday 02 June 2025 20:06:51 +0000 (0:00:19.253) 0:02:12.990 *********** 2025-06-02 20:07:49.414484 | 
orchestrator | ok: [testbed-node-2] 2025-06-02 20:07:49.414493 | orchestrator | 2025-06-02 20:07:49.414503 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2025-06-02 20:07:49.414512 | orchestrator | Monday 02 June 2025 20:07:11 +0000 (0:00:20.674) 0:02:33.664 *********** 2025-06-02 20:07:49.414521 | orchestrator | ok: [testbed-node-2] 2025-06-02 20:07:49.414531 | orchestrator | 2025-06-02 20:07:49.414540 | orchestrator | PLAY [Restart bootstrap mariadb service] *************************************** 2025-06-02 20:07:49.414549 | orchestrator | 2025-06-02 20:07:49.414559 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2025-06-02 20:07:49.414568 | orchestrator | Monday 02 June 2025 20:07:14 +0000 (0:00:02.765) 0:02:36.430 *********** 2025-06-02 20:07:49.414578 | orchestrator | changed: [testbed-node-0] 2025-06-02 20:07:49.414587 | orchestrator | 2025-06-02 20:07:49.414596 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2025-06-02 20:07:49.414606 | orchestrator | Monday 02 June 2025 20:07:31 +0000 (0:00:16.934) 0:02:53.365 *********** 2025-06-02 20:07:49.414621 | orchestrator | ok: [testbed-node-0] 2025-06-02 20:07:49.414631 | orchestrator | 2025-06-02 20:07:49.414640 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2025-06-02 20:07:49.414654 | orchestrator | Monday 02 June 2025 20:07:32 +0000 (0:00:00.615) 0:02:53.981 *********** 2025-06-02 20:07:49.414664 | orchestrator | ok: [testbed-node-0] 2025-06-02 20:07:49.414674 | orchestrator | 2025-06-02 20:07:49.414683 | orchestrator | PLAY [Apply mariadb post-configuration] **************************************** 2025-06-02 20:07:49.414692 | orchestrator | 2025-06-02 20:07:49.414702 | orchestrator | TASK [Include mariadb post-deploy.yml] ***************************************** 2025-06-02 20:07:49.414711 | orchestrator | 
Monday 02 June 2025 20:07:34 +0000 (0:00:02.405) 0:02:56.386 *********** 2025-06-02 20:07:49.414721 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 20:07:49.414730 | orchestrator | 2025-06-02 20:07:49.414739 | orchestrator | TASK [mariadb : Creating shard root mysql user] ******************************** 2025-06-02 20:07:49.414749 | orchestrator | Monday 02 June 2025 20:07:35 +0000 (0:00:00.540) 0:02:56.927 *********** 2025-06-02 20:07:49.414758 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:07:49.414767 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:07:49.414777 | orchestrator | changed: [testbed-node-0] 2025-06-02 20:07:49.414786 | orchestrator | 2025-06-02 20:07:49.414796 | orchestrator | TASK [mariadb : Creating mysql monitor user] *********************************** 2025-06-02 20:07:49.414805 | orchestrator | Monday 02 June 2025 20:07:37 +0000 (0:00:02.671) 0:02:59.599 *********** 2025-06-02 20:07:49.414845 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:07:49.414855 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:07:49.414864 | orchestrator | changed: [testbed-node-0] 2025-06-02 20:07:49.414873 | orchestrator | 2025-06-02 20:07:49.414883 | orchestrator | TASK [mariadb : Creating database backup user and setting permissions] ********* 2025-06-02 20:07:49.414892 | orchestrator | Monday 02 June 2025 20:07:40 +0000 (0:00:02.198) 0:03:01.797 *********** 2025-06-02 20:07:49.414901 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:07:49.414911 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:07:49.414920 | orchestrator | changed: [testbed-node-0] 2025-06-02 20:07:49.414929 | orchestrator | 2025-06-02 20:07:49.414939 | orchestrator | TASK [mariadb : Granting permissions on Mariabackup database to backup user] *** 2025-06-02 20:07:49.414948 | orchestrator | Monday 02 June 2025 20:07:42 +0000 (0:00:02.261) 0:03:04.058 *********** 2025-06-02 20:07:49.414957 | 
orchestrator | skipping: [testbed-node-1] 2025-06-02 20:07:49.414966 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:07:49.414976 | orchestrator | changed: [testbed-node-0] 2025-06-02 20:07:49.414985 | orchestrator | 2025-06-02 20:07:49.414994 | orchestrator | TASK [mariadb : Wait for MariaDB service to be ready through VIP] ************** 2025-06-02 20:07:49.415004 | orchestrator | Monday 02 June 2025 20:07:44 +0000 (0:00:02.150) 0:03:06.209 *********** 2025-06-02 20:07:49.415013 | orchestrator | ok: [testbed-node-0] 2025-06-02 20:07:49.415022 | orchestrator | ok: [testbed-node-1] 2025-06-02 20:07:49.415032 | orchestrator | ok: [testbed-node-2] 2025-06-02 20:07:49.415041 | orchestrator | 2025-06-02 20:07:49.415050 | orchestrator | TASK [Include mariadb post-upgrade.yml] **************************************** 2025-06-02 20:07:49.415060 | orchestrator | Monday 02 June 2025 20:07:47 +0000 (0:00:02.852) 0:03:09.061 *********** 2025-06-02 20:07:49.415069 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:07:49.415078 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:07:49.415087 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:07:49.415097 | orchestrator | 2025-06-02 20:07:49.415106 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-02 20:07:49.415115 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1  2025-06-02 20:07:49.415125 | orchestrator | testbed-node-0 : ok=34  changed=16  unreachable=0 failed=0 skipped=11  rescued=0 ignored=1  2025-06-02 20:07:49.415142 | orchestrator | testbed-node-1 : ok=20  changed=7  unreachable=0 failed=0 skipped=18  rescued=0 ignored=1  2025-06-02 20:07:49.415152 | orchestrator | testbed-node-2 : ok=20  changed=7  unreachable=0 failed=0 skipped=18  rescued=0 ignored=1  2025-06-02 20:07:49.415161 | orchestrator | 2025-06-02 20:07:49.415171 | orchestrator | 2025-06-02 20:07:49.415180 | orchestrator | 
TASKS RECAP ******************************************************************** 2025-06-02 20:07:49.415190 | orchestrator | Monday 02 June 2025 20:07:47 +0000 (0:00:00.218) 0:03:09.280 *********** 2025-06-02 20:07:49.415199 | orchestrator | =============================================================================== 2025-06-02 20:07:49.415209 | orchestrator | mariadb : Wait for MariaDB service port liveness ----------------------- 41.37s 2025-06-02 20:07:49.415218 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 38.32s 2025-06-02 20:07:49.415233 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 16.93s 2025-06-02 20:07:49.415243 | orchestrator | mariadb : Check MariaDB service port liveness -------------------------- 11.15s 2025-06-02 20:07:49.415253 | orchestrator | mariadb : Running MariaDB bootstrap container -------------------------- 10.07s 2025-06-02 20:07:49.415262 | orchestrator | mariadb : Starting first MariaDB container ------------------------------ 7.81s 2025-06-02 20:07:49.415271 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 5.17s 2025-06-02 20:07:49.415280 | orchestrator | mariadb : Copying over galera.cnf --------------------------------------- 3.88s 2025-06-02 20:07:49.415290 | orchestrator | mariadb : Check mariadb containers -------------------------------------- 3.51s 2025-06-02 20:07:49.415299 | orchestrator | service-cert-copy : mariadb | Copying over extra CA certificates -------- 3.32s 2025-06-02 20:07:49.415308 | orchestrator | mariadb : Copying over config.json files for services ------------------- 3.28s 2025-06-02 20:07:49.415317 | orchestrator | mariadb : Ensuring config directories exist ----------------------------- 3.11s 2025-06-02 20:07:49.415327 | orchestrator | Check MariaDB service --------------------------------------------------- 3.00s 2025-06-02 20:07:49.415341 | orchestrator | mariadb : Wait for 
MariaDB service to be ready through VIP -------------- 2.85s 2025-06-02 20:07:49.415350 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS certificate --- 2.76s 2025-06-02 20:07:49.415360 | orchestrator | mariadb : Creating shard root mysql user -------------------------------- 2.67s 2025-06-02 20:07:49.415369 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS key ----- 2.62s 2025-06-02 20:07:49.415378 | orchestrator | mariadb : Wait for first MariaDB service to sync WSREP ------------------ 2.47s 2025-06-02 20:07:49.415387 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 2.41s 2025-06-02 20:07:49.415397 | orchestrator | mariadb : Creating database backup user and setting permissions --------- 2.26s 2025-06-02 20:07:49.415406 | orchestrator | 2025-06-02 20:07:49 | INFO  | Task 2155f8bb-bf01-4398-b5f8-fc2aa575344c is in state STARTED 2025-06-02 20:07:49.415415 | orchestrator | 2025-06-02 20:07:49 | INFO  | Wait 1 second(s) until the next check 2025-06-02 20:09:05.648416 | orchestrator | 2025-06-02 20:09:05 | INFO  | Task e9c0ab87-da7c-4705-8f7a-95c4701d9f42 is in state STARTED 2025-06-02 20:09:05.650617 | orchestrator | 2025-06-02 20:09:05 | INFO  | Task b9bd58c0-51ef-4a4d-8566-c45d47a9a927 is in state STARTED 2025-06-02 20:09:05.652859 | orchestrator | 2025-06-02 20:09:05 | INFO  | Task 3dc7e772-9928-4352-8c43-c2de036f278e is in state STARTED 2025-06-02 
20:09:05.658152 | orchestrator | 2025-06-02 20:09:05.658256 | orchestrator | 2025-06-02 20:09:05.658266 | orchestrator | PLAY [Create ceph pools] ******************************************************* 2025-06-02 20:09:05.658271 | orchestrator | 2025-06-02 20:09:05.658275 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2025-06-02 20:09:05.658314 | orchestrator | Monday 02 June 2025 20:06:54 +0000 (0:00:00.570) 0:00:00.570 *********** 2025-06-02 20:09:05.658369 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-02 20:09:05.658377 | orchestrator | 2025-06-02 20:09:05.658413 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2025-06-02 20:09:05.658421 | orchestrator | Monday 02 June 2025 20:06:55 +0000 (0:00:00.598) 0:00:01.169 *********** 2025-06-02 20:09:05.658427 | orchestrator | ok: [testbed-node-4] 2025-06-02 20:09:05.658475 | orchestrator | ok: [testbed-node-3] 2025-06-02 20:09:05.658482 | orchestrator | ok: [testbed-node-5] 2025-06-02 20:09:05.659414 | orchestrator | 2025-06-02 20:09:05.659429 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2025-06-02 20:09:05.659437 | orchestrator | Monday 02 June 2025 20:06:56 +0000 (0:00:00.617) 0:00:01.787 *********** 2025-06-02 20:09:05.659443 | orchestrator | ok: [testbed-node-3] 2025-06-02 20:09:05.659449 | orchestrator | ok: [testbed-node-4] 2025-06-02 20:09:05.659456 | orchestrator | ok: [testbed-node-5] 2025-06-02 20:09:05.659462 | orchestrator | 2025-06-02 20:09:05.659469 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2025-06-02 20:09:05.659475 | orchestrator | Monday 02 June 2025 20:06:56 +0000 (0:00:00.285) 0:00:02.072 *********** 2025-06-02 20:09:05.659483 | orchestrator | ok: [testbed-node-3] 2025-06-02 20:09:05.659489 | orchestrator | ok: 
[testbed-node-4] 2025-06-02 20:09:05.659495 | orchestrator | ok: [testbed-node-5] 2025-06-02 20:09:05.659501 | orchestrator | 2025-06-02 20:09:05.659507 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2025-06-02 20:09:05.659513 | orchestrator | Monday 02 June 2025 20:06:57 +0000 (0:00:00.811) 0:00:02.884 *********** 2025-06-02 20:09:05.659519 | orchestrator | ok: [testbed-node-3] 2025-06-02 20:09:05.659524 | orchestrator | ok: [testbed-node-4] 2025-06-02 20:09:05.659531 | orchestrator | ok: [testbed-node-5] 2025-06-02 20:09:05.659537 | orchestrator | 2025-06-02 20:09:05.659543 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2025-06-02 20:09:05.659549 | orchestrator | Monday 02 June 2025 20:06:57 +0000 (0:00:00.282) 0:00:03.167 *********** 2025-06-02 20:09:05.659556 | orchestrator | ok: [testbed-node-3] 2025-06-02 20:09:05.659560 | orchestrator | ok: [testbed-node-4] 2025-06-02 20:09:05.659564 | orchestrator | ok: [testbed-node-5] 2025-06-02 20:09:05.659568 | orchestrator | 2025-06-02 20:09:05.659572 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2025-06-02 20:09:05.659575 | orchestrator | Monday 02 June 2025 20:06:57 +0000 (0:00:00.298) 0:00:03.465 *********** 2025-06-02 20:09:05.659579 | orchestrator | ok: [testbed-node-3] 2025-06-02 20:09:05.659596 | orchestrator | ok: [testbed-node-4] 2025-06-02 20:09:05.659600 | orchestrator | ok: [testbed-node-5] 2025-06-02 20:09:05.659604 | orchestrator | 2025-06-02 20:09:05.659608 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2025-06-02 20:09:05.659612 | orchestrator | Monday 02 June 2025 20:06:58 +0000 (0:00:00.331) 0:00:03.797 *********** 2025-06-02 20:09:05.659616 | orchestrator | skipping: [testbed-node-3] 2025-06-02 20:09:05.659620 | orchestrator | skipping: [testbed-node-4] 2025-06-02 20:09:05.659624 | orchestrator 
| skipping: [testbed-node-5] 2025-06-02 20:09:05.659628 | orchestrator | 2025-06-02 20:09:05.659631 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2025-06-02 20:09:05.659635 | orchestrator | Monday 02 June 2025 20:06:58 +0000 (0:00:00.477) 0:00:04.275 *********** 2025-06-02 20:09:05.659639 | orchestrator | ok: [testbed-node-3] 2025-06-02 20:09:05.659642 | orchestrator | ok: [testbed-node-4] 2025-06-02 20:09:05.659646 | orchestrator | ok: [testbed-node-5] 2025-06-02 20:09:05.659650 | orchestrator | 2025-06-02 20:09:05.659653 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2025-06-02 20:09:05.659657 | orchestrator | Monday 02 June 2025 20:06:58 +0000 (0:00:00.288) 0:00:04.564 *********** 2025-06-02 20:09:05.659679 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-06-02 20:09:05.659685 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-06-02 20:09:05.659691 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-06-02 20:09:05.659697 | orchestrator | 2025-06-02 20:09:05.659703 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2025-06-02 20:09:05.659708 | orchestrator | Monday 02 June 2025 20:06:59 +0000 (0:00:00.639) 0:00:05.204 *********** 2025-06-02 20:09:05.659714 | orchestrator | ok: [testbed-node-3] 2025-06-02 20:09:05.659721 | orchestrator | ok: [testbed-node-4] 2025-06-02 20:09:05.659756 | orchestrator | ok: [testbed-node-5] 2025-06-02 20:09:05.659763 | orchestrator | 2025-06-02 20:09:05.659769 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2025-06-02 20:09:05.659776 | orchestrator | Monday 02 June 2025 20:06:59 +0000 (0:00:00.413) 0:00:05.618 *********** 2025-06-02 20:09:05.659782 | orchestrator | ok: [testbed-node-3 -> 
testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-06-02 20:09:05.659788 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-06-02 20:09:05.659794 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-06-02 20:09:05.659801 | orchestrator | 2025-06-02 20:09:05.659807 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2025-06-02 20:09:05.659813 | orchestrator | Monday 02 June 2025 20:07:02 +0000 (0:00:02.185) 0:00:07.803 *********** 2025-06-02 20:09:05.659820 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-06-02 20:09:05.659826 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-06-02 20:09:05.659832 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-06-02 20:09:05.659838 | orchestrator | skipping: [testbed-node-3] 2025-06-02 20:09:05.659844 | orchestrator | 2025-06-02 20:09:05.659850 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2025-06-02 20:09:05.659893 | orchestrator | Monday 02 June 2025 20:07:02 +0000 (0:00:00.406) 0:00:08.210 *********** 2025-06-02 20:09:05.659903 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2025-06-02 20:09:05.659911 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2025-06-02 20:09:05.659918 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not 
containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2025-06-02 20:09:05.659931 | orchestrator | skipping: [testbed-node-3] 2025-06-02 20:09:05.659938 | orchestrator | 2025-06-02 20:09:05.659944 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2025-06-02 20:09:05.659950 | orchestrator | Monday 02 June 2025 20:07:03 +0000 (0:00:00.802) 0:00:09.013 *********** 2025-06-02 20:09:05.659958 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-06-02 20:09:05.659966 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-06-02 20:09:05.659973 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-06-02 20:09:05.659979 | orchestrator | skipping: [testbed-node-3] 2025-06-02 20:09:05.659986 | orchestrator | 2025-06-02 20:09:05.659993 | orchestrator | TASK [ceph-facts : Set_fact 
running_mon - container] *************************** 2025-06-02 20:09:05.660000 | orchestrator | Monday 02 June 2025 20:07:03 +0000 (0:00:00.156) 0:00:09.169 *********** 2025-06-02 20:09:05.660058 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '083c7a0651bf', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2025-06-02 20:07:00.549689', 'end': '2025-06-02 20:07:00.600364', 'delta': '0:00:00.050675', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['083c7a0651bf'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2025-06-02 20:09:05.660099 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': 'fe182c76eb92', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2025-06-02 20:07:01.321976', 'end': '2025-06-02 20:07:01.370187', 'delta': '0:00:00.048211', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['fe182c76eb92'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2025-06-02 20:09:05.660135 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': 'af9cfd845014', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2025-06-02 
20:07:01.865888', 'end': '2025-06-02 20:07:01.911604', 'delta': '0:00:00.045716', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['af9cfd845014'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2025-06-02 20:09:05.660148 | orchestrator | 2025-06-02 20:09:05.660154 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2025-06-02 20:09:05.660161 | orchestrator | Monday 02 June 2025 20:07:03 +0000 (0:00:00.369) 0:00:09.539 *********** 2025-06-02 20:09:05.660167 | orchestrator | ok: [testbed-node-3] 2025-06-02 20:09:05.660173 | orchestrator | ok: [testbed-node-4] 2025-06-02 20:09:05.660179 | orchestrator | ok: [testbed-node-5] 2025-06-02 20:09:05.660184 | orchestrator | 2025-06-02 20:09:05.660190 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2025-06-02 20:09:05.660196 | orchestrator | Monday 02 June 2025 20:07:04 +0000 (0:00:00.443) 0:00:09.982 *********** 2025-06-02 20:09:05.660201 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] 2025-06-02 20:09:05.660208 | orchestrator | 2025-06-02 20:09:05.660213 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2025-06-02 20:09:05.660219 | orchestrator | Monday 02 June 2025 20:07:05 +0000 (0:00:01.635) 0:00:11.618 *********** 2025-06-02 20:09:05.660224 | orchestrator | skipping: [testbed-node-3] 2025-06-02 20:09:05.660230 | orchestrator | skipping: [testbed-node-4] 2025-06-02 20:09:05.660267 | orchestrator | skipping: [testbed-node-5] 2025-06-02 20:09:05.660275 | orchestrator | 2025-06-02 20:09:05.660282 | 
orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2025-06-02 20:09:05.660288 | orchestrator | Monday 02 June 2025 20:07:06 +0000 (0:00:00.268) 0:00:11.886 *********** 2025-06-02 20:09:05.660294 | orchestrator | skipping: [testbed-node-3] 2025-06-02 20:09:05.660300 | orchestrator | skipping: [testbed-node-4] 2025-06-02 20:09:05.660307 | orchestrator | skipping: [testbed-node-5] 2025-06-02 20:09:05.660313 | orchestrator | 2025-06-02 20:09:05.660319 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2025-06-02 20:09:05.660325 | orchestrator | Monday 02 June 2025 20:07:06 +0000 (0:00:00.389) 0:00:12.275 *********** 2025-06-02 20:09:05.660332 | orchestrator | skipping: [testbed-node-3] 2025-06-02 20:09:05.660338 | orchestrator | skipping: [testbed-node-4] 2025-06-02 20:09:05.660344 | orchestrator | skipping: [testbed-node-5] 2025-06-02 20:09:05.660350 | orchestrator | 2025-06-02 20:09:05.660356 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2025-06-02 20:09:05.660362 | orchestrator | Monday 02 June 2025 20:07:06 +0000 (0:00:00.366) 0:00:12.642 *********** 2025-06-02 20:09:05.660368 | orchestrator | ok: [testbed-node-3] 2025-06-02 20:09:05.660373 | orchestrator | 2025-06-02 20:09:05.660379 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2025-06-02 20:09:05.660385 | orchestrator | Monday 02 June 2025 20:07:06 +0000 (0:00:00.107) 0:00:12.750 *********** 2025-06-02 20:09:05.660391 | orchestrator | skipping: [testbed-node-3] 2025-06-02 20:09:05.660398 | orchestrator | 2025-06-02 20:09:05.660404 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2025-06-02 20:09:05.660411 | orchestrator | Monday 02 June 2025 20:07:07 +0000 (0:00:00.211) 0:00:12.961 *********** 2025-06-02 20:09:05.660417 | orchestrator | skipping: [testbed-node-3] 
2025-06-02 20:09:05.660423 | orchestrator | skipping: [testbed-node-4] 2025-06-02 20:09:05.660429 | orchestrator | skipping: [testbed-node-5] 2025-06-02 20:09:05.660436 | orchestrator | 2025-06-02 20:09:05.660442 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2025-06-02 20:09:05.660448 | orchestrator | Monday 02 June 2025 20:07:07 +0000 (0:00:00.246) 0:00:13.208 *********** 2025-06-02 20:09:05.660455 | orchestrator | skipping: [testbed-node-3] 2025-06-02 20:09:05.660461 | orchestrator | skipping: [testbed-node-4] 2025-06-02 20:09:05.660467 | orchestrator | skipping: [testbed-node-5] 2025-06-02 20:09:05.660478 | orchestrator | 2025-06-02 20:09:05.660485 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2025-06-02 20:09:05.660491 | orchestrator | Monday 02 June 2025 20:07:07 +0000 (0:00:00.279) 0:00:13.487 *********** 2025-06-02 20:09:05.660497 | orchestrator | skipping: [testbed-node-3] 2025-06-02 20:09:05.660504 | orchestrator | skipping: [testbed-node-4] 2025-06-02 20:09:05.660510 | orchestrator | skipping: [testbed-node-5] 2025-06-02 20:09:05.660516 | orchestrator | 2025-06-02 20:09:05.660522 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2025-06-02 20:09:05.660535 | orchestrator | Monday 02 June 2025 20:07:08 +0000 (0:00:00.383) 0:00:13.871 *********** 2025-06-02 20:09:05.660542 | orchestrator | skipping: [testbed-node-3] 2025-06-02 20:09:05.660548 | orchestrator | skipping: [testbed-node-4] 2025-06-02 20:09:05.660621 | orchestrator | skipping: [testbed-node-5] 2025-06-02 20:09:05.660627 | orchestrator | 2025-06-02 20:09:05.660633 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2025-06-02 20:09:05.660640 | orchestrator | Monday 02 June 2025 20:07:08 +0000 (0:00:00.279) 0:00:14.151 *********** 2025-06-02 20:09:05.660645 | orchestrator | skipping: [testbed-node-3] 
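The "Resolve device link(s)" / "Set_fact build devices from resolved symlinks" tasks above (skipped here because no explicit device list is set) normalize `/dev/disk/by-*` aliases to canonical device paths, so the same disk is never counted twice under two names. A rough sketch of that normalization, assuming a hypothetical helper (not ceph-ansible's implementation):

```python
import os


def resolve_devices(devices: list[str]) -> list[str]:
    """Resolve /dev/disk/by-* style symlinks to canonical device paths.

    Comparing canonical paths avoids treating /dev/sdb and its by-id
    alias as two different OSD devices in the inventory.
    """
    return [os.path.realpath(d) for d in devices]
```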
2025-06-02 20:09:05.660652 | orchestrator | skipping: [testbed-node-4] 2025-06-02 20:09:05.660658 | orchestrator | skipping: [testbed-node-5] 2025-06-02 20:09:05.660665 | orchestrator | 2025-06-02 20:09:05.660671 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2025-06-02 20:09:05.660677 | orchestrator | Monday 02 June 2025 20:07:08 +0000 (0:00:00.282) 0:00:14.433 *********** 2025-06-02 20:09:05.660683 | orchestrator | skipping: [testbed-node-3] 2025-06-02 20:09:05.660689 | orchestrator | skipping: [testbed-node-4] 2025-06-02 20:09:05.660695 | orchestrator | skipping: [testbed-node-5] 2025-06-02 20:09:05.660701 | orchestrator | 2025-06-02 20:09:05.660707 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2025-06-02 20:09:05.660760 | orchestrator | Monday 02 June 2025 20:07:08 +0000 (0:00:00.291) 0:00:14.724 *********** 2025-06-02 20:09:05.660768 | orchestrator | skipping: [testbed-node-3] 2025-06-02 20:09:05.660804 | orchestrator | skipping: [testbed-node-4] 2025-06-02 20:09:05.660812 | orchestrator | skipping: [testbed-node-5] 2025-06-02 20:09:05.660818 | orchestrator | 2025-06-02 20:09:05.660829 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2025-06-02 20:09:05.660835 | orchestrator | Monday 02 June 2025 20:07:09 +0000 (0:00:00.513) 0:00:15.238 *********** 2025-06-02 20:09:05.660843 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--93e9f309--356a--50f8--bf6b--26db11b00033-osd--block--93e9f309--356a--50f8--bf6b--26db11b00033', 'dm-uuid-LVM-nq5ePTHYYeiXBqOEzKhSv5x7IpcUjKZPc0XaKOILv5EsZsvk4hPA7okc94KObNQM'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 
'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-06-02 20:09:05.660852 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--01a13ba8--1f69--5051--bec5--e01e7e9b87e5-osd--block--01a13ba8--1f69--5051--bec5--e01e7e9b87e5', 'dm-uuid-LVM-VA9A5JOIOF0zJoCyeskPzSbp7bqOuFcA3Z0dXMzoWiuWDZAX3i6zm9YhOku87Dd4'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-06-02 20:09:05.660859 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 20:09:05.660872 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 20:09:05.660879 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 
'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 20:09:05.660886 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 20:09:05.660892 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 20:09:05.660920 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 20:09:05.660930 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 20:09:05.660936 | orchestrator | skipping: [testbed-node-3] => 
(item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 20:09:05.660945 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3b3451ec-fae9-4227-a22d-4a5dda6aaaab', 'scsi-SQEMU_QEMU_HARDDISK_3b3451ec-fae9-4227-a22d-4a5dda6aaaab'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3b3451ec-fae9-4227-a22d-4a5dda6aaaab-part1', 'scsi-SQEMU_QEMU_HARDDISK_3b3451ec-fae9-4227-a22d-4a5dda6aaaab-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3b3451ec-fae9-4227-a22d-4a5dda6aaaab-part14', 'scsi-SQEMU_QEMU_HARDDISK_3b3451ec-fae9-4227-a22d-4a5dda6aaaab-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3b3451ec-fae9-4227-a22d-4a5dda6aaaab-part15', 'scsi-SQEMU_QEMU_HARDDISK_3b3451ec-fae9-4227-a22d-4a5dda6aaaab-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3b3451ec-fae9-4227-a22d-4a5dda6aaaab-part16', 
'scsi-SQEMU_QEMU_HARDDISK_3b3451ec-fae9-4227-a22d-4a5dda6aaaab-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-02 20:09:05.660958 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--93e9f309--356a--50f8--bf6b--26db11b00033-osd--block--93e9f309--356a--50f8--bf6b--26db11b00033'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-fGb4mF-dZsm-xEfi-5vlv-eGmP-tK83-iaztIV', 'scsi-0QEMU_QEMU_HARDDISK_f90c13d8-18de-4224-a0ec-2fb9bc967aba', 'scsi-SQEMU_QEMU_HARDDISK_f90c13d8-18de-4224-a0ec-2fb9bc967aba'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-02 20:09:05.660985 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--01a13ba8--1f69--5051--bec5--e01e7e9b87e5-osd--block--01a13ba8--1f69--5051--bec5--e01e7e9b87e5'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-I3N9TS-jEbF-egUA-3DLa-bL0J-Gloh-NrjqNb', 'scsi-0QEMU_QEMU_HARDDISK_31522631-626d-4eab-bbf4-d80ec429ee40', 'scsi-SQEMU_QEMU_HARDDISK_31522631-626d-4eab-bbf4-d80ec429ee40'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-02 20:09:05.660993 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--bdb59653--b88e--5628--a878--3ed7677d43f1-osd--block--bdb59653--b88e--5628--a878--3ed7677d43f1', 'dm-uuid-LVM-JnmMTlcXje3zZupdQTnGuJCtXtyKkwfTVXMNK88NfT48uRBqVys3sMrodSxbtxGo'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-06-02 20:09:05.660999 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2edf9efd-121b-4ff6-b6f5-d420782ba04f', 'scsi-SQEMU_QEMU_HARDDISK_2edf9efd-121b-4ff6-b6f5-d420782ba04f'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-02 20:09:05.661011 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--ee20b18c--4531--5b6f--acaf--50beaceb257d-osd--block--ee20b18c--4531--5b6f--acaf--50beaceb257d', 'dm-uuid-LVM-pVYrEMzJRmJqf2kAIqHaSSxrfgvkeBNsYFqWIyK50ay2dEik5sDtbhIdmSMNgg5z'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-06-02 20:09:05.661018 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-06-02-19-17-40-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-02 20:09:05.661055 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': 
'0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 20:09:05.661063 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 20:09:05.661112 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 20:09:05.661124 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 20:09:05.661151 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 
'virtual': 1}})  2025-06-02 20:09:05.661160 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 20:09:05.661172 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 20:09:05.661178 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 20:09:05.661184 | orchestrator | skipping: [testbed-node-3] 2025-06-02 20:09:05.661211 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_453458da-4d99-4de0-a2fa-ec8f657b9d69', 'scsi-SQEMU_QEMU_HARDDISK_453458da-4d99-4de0-a2fa-ec8f657b9d69'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_453458da-4d99-4de0-a2fa-ec8f657b9d69-part1', 'scsi-SQEMU_QEMU_HARDDISK_453458da-4d99-4de0-a2fa-ec8f657b9d69-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_453458da-4d99-4de0-a2fa-ec8f657b9d69-part14', 'scsi-SQEMU_QEMU_HARDDISK_453458da-4d99-4de0-a2fa-ec8f657b9d69-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_453458da-4d99-4de0-a2fa-ec8f657b9d69-part15', 'scsi-SQEMU_QEMU_HARDDISK_453458da-4d99-4de0-a2fa-ec8f657b9d69-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_453458da-4d99-4de0-a2fa-ec8f657b9d69-part16', 'scsi-SQEMU_QEMU_HARDDISK_453458da-4d99-4de0-a2fa-ec8f657b9d69-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-02 20:09:05.661226 | 
orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--bdb59653--b88e--5628--a878--3ed7677d43f1-osd--block--bdb59653--b88e--5628--a878--3ed7677d43f1'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-MScA2o-wrwR-cxTI-HSN1-CJaA-ZmWO-duVTze', 'scsi-0QEMU_QEMU_HARDDISK_56067267-e29e-4b33-bc58-6a568e4c77ee', 'scsi-SQEMU_QEMU_HARDDISK_56067267-e29e-4b33-bc58-6a568e4c77ee'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-02 20:09:05.661233 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--ee20b18c--4531--5b6f--acaf--50beaceb257d-osd--block--ee20b18c--4531--5b6f--acaf--50beaceb257d'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-GWz1kk-3I3I-MJR6-xAen-2SHi-BVgj-e8DG44', 'scsi-0QEMU_QEMU_HARDDISK_afb213e9-57a6-474d-a5f5-62ab693fc54b', 'scsi-SQEMU_QEMU_HARDDISK_afb213e9-57a6-474d-a5f5-62ab693fc54b'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-02 20:09:05.661243 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b537626e-57d0-4db8-bc93-475b5479d5db', 'scsi-SQEMU_QEMU_HARDDISK_b537626e-57d0-4db8-bc93-475b5479d5db'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-02 20:09:05.661250 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-06-02-19-17-35-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-02 20:09:05.661256 | orchestrator | skipping: [testbed-node-4] 2025-06-02 20:09:05.661263 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--86208513--8fbd--535b--80fd--915c228be133-osd--block--86208513--8fbd--535b--80fd--915c228be133', 'dm-uuid-LVM-AXsfCRSUZ922JqSVuA1OB0lhGcw2SnPS8zh8EFbuCqvp1KxISDIjRi8k1SRymk26'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-06-02 20:09:05.661273 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': 
['dm-name-ceph--ed769c7c--5756--52eb--9583--a607cefce370-osd--block--ed769c7c--5756--52eb--9583--a607cefce370', 'dm-uuid-LVM-YzJgxiVmVv1MohBuCR2yiPf0zwqUhauMGgazaSDH5MQUhPgBeb5aSgAB2yMhXtX5'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-06-02 20:09:05 | INFO  | Task 2155f8bb-bf01-4398-b5f8-fc2aa575344c is in state SUCCESS 2025-06-02 20:09:05.661289 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 20:09:05.661297 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 20:09:05.661307 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 
1}})  2025-06-02 20:09:05.661314 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 20:09:05.661321 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 20:09:05.661327 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 20:09:05.661333 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 20:09:05.661340 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 
'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 20:09:05.661357 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_08b58ced-3c5b-405c-ae09-2d18558cfc25', 'scsi-SQEMU_QEMU_HARDDISK_08b58ced-3c5b-405c-ae09-2d18558cfc25'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_08b58ced-3c5b-405c-ae09-2d18558cfc25-part1', 'scsi-SQEMU_QEMU_HARDDISK_08b58ced-3c5b-405c-ae09-2d18558cfc25-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_08b58ced-3c5b-405c-ae09-2d18558cfc25-part14', 'scsi-SQEMU_QEMU_HARDDISK_08b58ced-3c5b-405c-ae09-2d18558cfc25-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_08b58ced-3c5b-405c-ae09-2d18558cfc25-part15', 'scsi-SQEMU_QEMU_HARDDISK_08b58ced-3c5b-405c-ae09-2d18558cfc25-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_08b58ced-3c5b-405c-ae09-2d18558cfc25-part16', 'scsi-SQEMU_QEMU_HARDDISK_08b58ced-3c5b-405c-ae09-2d18558cfc25-part16'], 'labels': ['BOOT'], 'masters': 
[], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-02 20:09:05.661368 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--86208513--8fbd--535b--80fd--915c228be133-osd--block--86208513--8fbd--535b--80fd--915c228be133'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-PwbXIf-nbiY-VZEp-Jwyt-8O2F-GGtW-x9I8wZ', 'scsi-0QEMU_QEMU_HARDDISK_fe4e5841-a5c6-4e3c-a2f3-c1ddedcfef3b', 'scsi-SQEMU_QEMU_HARDDISK_fe4e5841-a5c6-4e3c-a2f3-c1ddedcfef3b'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-02 20:09:05.661375 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--ed769c7c--5756--52eb--9583--a607cefce370-osd--block--ed769c7c--5756--52eb--9583--a607cefce370'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-i42UGR-1M23-JeND-WGwO-3Hx7-Q2xw-qnnSe5', 'scsi-0QEMU_QEMU_HARDDISK_17194968-3402-4871-a3b7-d8b4dd3032d8', 'scsi-SQEMU_QEMU_HARDDISK_17194968-3402-4871-a3b7-d8b4dd3032d8'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-02 20:09:05.661381 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3a83bf91-153f-49f3-b384-9ce8856c05fb', 'scsi-SQEMU_QEMU_HARDDISK_3a83bf91-153f-49f3-b384-9ce8856c05fb'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-02 20:09:05.661396 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-06-02-19-17-37-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-02 20:09:05.661403 | orchestrator | skipping: [testbed-node-5] 2025-06-02 20:09:05.661410 | orchestrator | 2025-06-02 20:09:05.661416 | orchestrator | TASK [ceph-facts : Set_fact devices 
generate device list when osd_auto_discovery] *** 2025-06-02 20:09:05.661422 | orchestrator | Monday 02 June 2025 20:07:10 +0000 (0:00:00.607) 0:00:15.845 *********** 2025-06-02 20:09:05.661432 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--93e9f309--356a--50f8--bf6b--26db11b00033-osd--block--93e9f309--356a--50f8--bf6b--26db11b00033', 'dm-uuid-LVM-nq5ePTHYYeiXBqOEzKhSv5x7IpcUjKZPc0XaKOILv5EsZsvk4hPA7okc94KObNQM'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 20:09:05.661439 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--01a13ba8--1f69--5051--bec5--e01e7e9b87e5-osd--block--01a13ba8--1f69--5051--bec5--e01e7e9b87e5', 'dm-uuid-LVM-VA9A5JOIOF0zJoCyeskPzSbp7bqOuFcA3Z0dXMzoWiuWDZAX3i6zm9YhOku87Dd4'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 20:09:05.661446 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 
'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 20:09:05.661452 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 20:09:05.661459 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 20:09:05.661471 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': 
{'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 20:09:05.661482 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 20:09:05.661488 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 20:09:05.661524 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 
'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 20:09:05.661555 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--bdb59653--b88e--5628--a878--3ed7677d43f1-osd--block--bdb59653--b88e--5628--a878--3ed7677d43f1', 'dm-uuid-LVM-JnmMTlcXje3zZupdQTnGuJCtXtyKkwfTVXMNK88NfT48uRBqVys3sMrodSxbtxGo'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 20:09:05.661562 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 20:09:05.661577 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 
'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--ee20b18c--4531--5b6f--acaf--50beaceb257d-osd--block--ee20b18c--4531--5b6f--acaf--50beaceb257d', 'dm-uuid-LVM-pVYrEMzJRmJqf2kAIqHaSSxrfgvkeBNsYFqWIyK50ay2dEik5sDtbhIdmSMNgg5z'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 20:09:05.661627 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3b3451ec-fae9-4227-a22d-4a5dda6aaaab', 'scsi-SQEMU_QEMU_HARDDISK_3b3451ec-fae9-4227-a22d-4a5dda6aaaab'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3b3451ec-fae9-4227-a22d-4a5dda6aaaab-part1', 'scsi-SQEMU_QEMU_HARDDISK_3b3451ec-fae9-4227-a22d-4a5dda6aaaab-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3b3451ec-fae9-4227-a22d-4a5dda6aaaab-part14', 'scsi-SQEMU_QEMU_HARDDISK_3b3451ec-fae9-4227-a22d-4a5dda6aaaab-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_3b3451ec-fae9-4227-a22d-4a5dda6aaaab-part15', 'scsi-SQEMU_QEMU_HARDDISK_3b3451ec-fae9-4227-a22d-4a5dda6aaaab-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3b3451ec-fae9-4227-a22d-4a5dda6aaaab-part16', 'scsi-SQEMU_QEMU_HARDDISK_3b3451ec-fae9-4227-a22d-4a5dda6aaaab-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 20:09:05.661635 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 20:09:05.661647 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--93e9f309--356a--50f8--bf6b--26db11b00033-osd--block--93e9f309--356a--50f8--bf6b--26db11b00033'], 'host': 
'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-fGb4mF-dZsm-xEfi-5vlv-eGmP-tK83-iaztIV', 'scsi-0QEMU_QEMU_HARDDISK_f90c13d8-18de-4224-a0ec-2fb9bc967aba', 'scsi-SQEMU_QEMU_HARDDISK_f90c13d8-18de-4224-a0ec-2fb9bc967aba'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 20:09:05.661662 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 20:09:05.661669 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--01a13ba8--1f69--5051--bec5--e01e7e9b87e5-osd--block--01a13ba8--1f69--5051--bec5--e01e7e9b87e5'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-I3N9TS-jEbF-egUA-3DLa-bL0J-Gloh-NrjqNb', 'scsi-0QEMU_QEMU_HARDDISK_31522631-626d-4eab-bbf4-d80ec429ee40', 'scsi-SQEMU_QEMU_HARDDISK_31522631-626d-4eab-bbf4-d80ec429ee40'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 20:09:05.661676 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 20:09:05.661682 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2edf9efd-121b-4ff6-b6f5-d420782ba04f', 'scsi-SQEMU_QEMU_HARDDISK_2edf9efd-121b-4ff6-b6f5-d420782ba04f'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 20:09:05.661689 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 20:09:05.661703 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-06-02-19-17-40-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 20:09:05.661717 | orchestrator | skipping: 
[testbed-node-3] 2025-06-02 20:09:05.661724 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 20:09:05.661768 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 20:09:05.661773 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 20:09:05.661777 | orchestrator | skipping: 
[testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 20:09:05.661781 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--86208513--8fbd--535b--80fd--915c228be133-osd--block--86208513--8fbd--535b--80fd--915c228be133', 'dm-uuid-LVM-AXsfCRSUZ922JqSVuA1OB0lhGcw2SnPS8zh8EFbuCqvp1KxISDIjRi8k1SRymk26'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 20:09:05.661793 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_453458da-4d99-4de0-a2fa-ec8f657b9d69', 'scsi-SQEMU_QEMU_HARDDISK_453458da-4d99-4de0-a2fa-ec8f657b9d69'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_453458da-4d99-4de0-a2fa-ec8f657b9d69-part1', 'scsi-SQEMU_QEMU_HARDDISK_453458da-4d99-4de0-a2fa-ec8f657b9d69-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_453458da-4d99-4de0-a2fa-ec8f657b9d69-part14', 'scsi-SQEMU_QEMU_HARDDISK_453458da-4d99-4de0-a2fa-ec8f657b9d69-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_453458da-4d99-4de0-a2fa-ec8f657b9d69-part15', 'scsi-SQEMU_QEMU_HARDDISK_453458da-4d99-4de0-a2fa-ec8f657b9d69-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_453458da-4d99-4de0-a2fa-ec8f657b9d69-part16', 'scsi-SQEMU_QEMU_HARDDISK_453458da-4d99-4de0-a2fa-ec8f657b9d69-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2025-06-02 20:09:05.661803 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--ed769c7c--5756--52eb--9583--a607cefce370-osd--block--ed769c7c--5756--52eb--9583--a607cefce370', 'dm-uuid-LVM-YzJgxiVmVv1MohBuCR2yiPf0zwqUhauMGgazaSDH5MQUhPgBeb5aSgAB2yMhXtX5'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-06-02 20:09:05.661807 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--bdb59653--b88e--5628--a878--3ed7677d43f1-osd--block--bdb59653--b88e--5628--a878--3ed7677d43f1'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-MScA2o-wrwR-cxTI-HSN1-CJaA-ZmWO-duVTze', 'scsi-0QEMU_QEMU_HARDDISK_56067267-e29e-4b33-bc58-6a568e4c77ee', 'scsi-SQEMU_QEMU_HARDDISK_56067267-e29e-4b33-bc58-6a568e4c77ee'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-06-02 20:09:05.661814 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-06-02 20:09:05.661824 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--ee20b18c--4531--5b6f--acaf--50beaceb257d-osd--block--ee20b18c--4531--5b6f--acaf--50beaceb257d'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-GWz1kk-3I3I-MJR6-xAen-2SHi-BVgj-e8DG44', 'scsi-0QEMU_QEMU_HARDDISK_afb213e9-57a6-474d-a5f5-62ab693fc54b', 'scsi-SQEMU_QEMU_HARDDISK_afb213e9-57a6-474d-a5f5-62ab693fc54b'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-06-02 20:09:05.661828 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-06-02 20:09:05.661832 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b537626e-57d0-4db8-bc93-475b5479d5db', 'scsi-SQEMU_QEMU_HARDDISK_b537626e-57d0-4db8-bc93-475b5479d5db'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-06-02 20:09:05.661836 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-06-02 20:09:05.661840 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-06-02-19-17-35-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-06-02 20:09:05.661894 | orchestrator | skipping: [testbed-node-4]
2025-06-02 20:09:05.661937 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-06-02 20:09:05.661945 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-06-02 20:09:05.661951 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-06-02 20:09:05.661957 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-06-02 20:09:05.661964 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-06-02 20:09:05.661978 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_08b58ced-3c5b-405c-ae09-2d18558cfc25', 'scsi-SQEMU_QEMU_HARDDISK_08b58ced-3c5b-405c-ae09-2d18558cfc25'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_08b58ced-3c5b-405c-ae09-2d18558cfc25-part1', 'scsi-SQEMU_QEMU_HARDDISK_08b58ced-3c5b-405c-ae09-2d18558cfc25-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_08b58ced-3c5b-405c-ae09-2d18558cfc25-part14', 'scsi-SQEMU_QEMU_HARDDISK_08b58ced-3c5b-405c-ae09-2d18558cfc25-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_08b58ced-3c5b-405c-ae09-2d18558cfc25-part15', 'scsi-SQEMU_QEMU_HARDDISK_08b58ced-3c5b-405c-ae09-2d18558cfc25-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_08b58ced-3c5b-405c-ae09-2d18558cfc25-part16', 'scsi-SQEMU_QEMU_HARDDISK_08b58ced-3c5b-405c-ae09-2d18558cfc25-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-06-02 20:09:05.661990 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--86208513--8fbd--535b--80fd--915c228be133-osd--block--86208513--8fbd--535b--80fd--915c228be133'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-PwbXIf-nbiY-VZEp-Jwyt-8O2F-GGtW-x9I8wZ', 'scsi-0QEMU_QEMU_HARDDISK_fe4e5841-a5c6-4e3c-a2f3-c1ddedcfef3b', 'scsi-SQEMU_QEMU_HARDDISK_fe4e5841-a5c6-4e3c-a2f3-c1ddedcfef3b'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-06-02 20:09:05.661997 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--ed769c7c--5756--52eb--9583--a607cefce370-osd--block--ed769c7c--5756--52eb--9583--a607cefce370'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-i42UGR-1M23-JeND-WGwO-3Hx7-Q2xw-qnnSe5', 'scsi-0QEMU_QEMU_HARDDISK_17194968-3402-4871-a3b7-d8b4dd3032d8', 'scsi-SQEMU_QEMU_HARDDISK_17194968-3402-4871-a3b7-d8b4dd3032d8'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-06-02 20:09:05.662004 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3a83bf91-153f-49f3-b384-9ce8856c05fb', 'scsi-SQEMU_QEMU_HARDDISK_3a83bf91-153f-49f3-b384-9ce8856c05fb'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-06-02 20:09:05.662058 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-06-02-19-17-37-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-06-02 20:09:05.662067 | orchestrator | skipping: [testbed-node-5]
2025-06-02 20:09:05.662073 | orchestrator |
2025-06-02 20:09:05.662080 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ******************************
2025-06-02 20:09:05.662086 | orchestrator | Monday 02 June 2025 20:07:10 +0000 (0:00:00.581) 0:00:16.427 ***********
2025-06-02 20:09:05.662091 | orchestrator | ok: [testbed-node-3]
2025-06-02 20:09:05.662097 | orchestrator | ok: [testbed-node-4]
2025-06-02 20:09:05.662103 | orchestrator | ok: [testbed-node-5]
2025-06-02 20:09:05.662109 | orchestrator |
2025-06-02 20:09:05.662115 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] ***************
2025-06-02 20:09:05.662119 | orchestrator | Monday 02 June 2025 20:07:11 +0000 (0:00:00.710) 0:00:17.138 ***********
2025-06-02 20:09:05.662123 | orchestrator | ok: [testbed-node-3]
2025-06-02 20:09:05.662127 | orchestrator | ok: [testbed-node-4]
2025-06-02 20:09:05.662130 | orchestrator | ok: [testbed-node-5]
2025-06-02 20:09:05.662134 | orchestrator |
2025-06-02 20:09:05.662138 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2025-06-02 20:09:05.662142 | orchestrator | Monday 02 June 2025 20:07:11 +0000 (0:00:00.467) 0:00:17.606 ***********
2025-06-02 20:09:05.662146 | orchestrator | ok: [testbed-node-3]
2025-06-02 20:09:05.662151 | orchestrator | ok: [testbed-node-4]
2025-06-02 20:09:05.662157 | orchestrator | ok: [testbed-node-5]
2025-06-02 20:09:05.662164 | orchestrator |
2025-06-02 20:09:05.662170 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2025-06-02 20:09:05.662176 | orchestrator | Monday 02 June 2025 20:07:12 +0000 (0:00:00.656) 0:00:18.262 ***********
2025-06-02 20:09:05.662182 | orchestrator | skipping: [testbed-node-3]
2025-06-02 20:09:05.662188 | orchestrator | skipping: [testbed-node-4]
2025-06-02 20:09:05.662193 | orchestrator | skipping: [testbed-node-5]
2025-06-02 20:09:05.662200 | orchestrator |
2025-06-02 20:09:05.662206 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2025-06-02 20:09:05.662213 | orchestrator | Monday 02 June 2025 20:07:12 +0000 (0:00:00.430) 0:00:18.585 ***********
2025-06-02 20:09:05.662219 | orchestrator | skipping: [testbed-node-3]
2025-06-02 20:09:05.662225 | orchestrator | skipping: [testbed-node-4]
2025-06-02 20:09:05.662231 | orchestrator | skipping: [testbed-node-5]
2025-06-02 20:09:05.662237 | orchestrator |
2025-06-02 20:09:05.662243 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2025-06-02 20:09:05.662250 | orchestrator | Monday 02 June 2025 20:07:13 +0000 (0:00:00.515) 0:00:19.016 ***********
2025-06-02 20:09:05.662264 | orchestrator | skipping: [testbed-node-3]
2025-06-02 20:09:05.662271 | orchestrator | skipping: [testbed-node-4]
2025-06-02 20:09:05.662277 | orchestrator | skipping: [testbed-node-5]
2025-06-02 20:09:05.662290 | orchestrator |
2025-06-02 20:09:05.662296 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2025-06-02 20:09:05.662303 | orchestrator | Monday 02 June 2025 20:07:13 +0000 (0:00:00.836) 0:00:19.531 ***********
2025-06-02 20:09:05.662309 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0)
2025-06-02 20:09:05.662316 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0)
2025-06-02 20:09:05.662322 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1)
2025-06-02 20:09:05.662328 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0)
2025-06-02 20:09:05.662334 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1)
2025-06-02 20:09:05.662340 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2)
2025-06-02 20:09:05.662346 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1)
2025-06-02 20:09:05.662352 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2)
2025-06-02 20:09:05.662358 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2)
2025-06-02 20:09:05.662364 | orchestrator |
2025-06-02 20:09:05.662371 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] *************************
2025-06-02 20:09:05.662377 | orchestrator | Monday 02 June 2025 20:07:14 +0000 (0:00:00.836) 0:00:20.367 ***********
2025-06-02 20:09:05.662413 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2025-06-02 20:09:05.662421 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2025-06-02 20:09:05.662427 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2025-06-02 20:09:05.662434 | orchestrator | skipping: [testbed-node-3]
2025-06-02 20:09:05.662440 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2025-06-02 20:09:05.662447 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2025-06-02 20:09:05.662454 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2025-06-02 20:09:05.662461 | orchestrator | skipping: [testbed-node-4]
2025-06-02 20:09:05.662467 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2025-06-02 20:09:05.662473 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2025-06-02 20:09:05.662480 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2025-06-02 20:09:05.662487 | orchestrator | skipping: [testbed-node-5]
2025-06-02 20:09:05.662493 | orchestrator |
2025-06-02 20:09:05.662500 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] ***********************
2025-06-02 20:09:05.662508 | orchestrator | Monday 02 June 2025 20:07:14 +0000 (0:00:00.343) 0:00:20.711 ***********
2025-06-02 20:09:05.662515 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-06-02 20:09:05.662522 | orchestrator |
2025-06-02 20:09:05.662529 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2025-06-02 20:09:05.662537 | orchestrator | Monday 02 June 2025 20:07:15 +0000 (0:00:00.680) 0:00:21.392 ***********
2025-06-02 20:09:05.662550 | orchestrator | skipping: [testbed-node-3]
2025-06-02 20:09:05.662558 | orchestrator | skipping: [testbed-node-4]
2025-06-02 20:09:05.662564 | orchestrator | skipping: [testbed-node-5]
2025-06-02 20:09:05.662571 | orchestrator |
2025-06-02 20:09:05.662578 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2025-06-02 20:09:05.662589 | orchestrator | Monday 02 June 2025 20:07:15 +0000 (0:00:00.326) 0:00:21.718 ***********
2025-06-02 20:09:05.662595 | orchestrator | skipping: [testbed-node-3]
2025-06-02 20:09:05.662602 | orchestrator | skipping: [testbed-node-4]
2025-06-02 20:09:05.662609 | orchestrator | skipping: [testbed-node-5]
2025-06-02 20:09:05.662616 | orchestrator |
2025-06-02 20:09:05.662622 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2025-06-02 20:09:05.662629 | orchestrator | Monday 02 June 2025 20:07:16 +0000 (0:00:00.321) 0:00:22.040 ***********
2025-06-02 20:09:05.662636 | orchestrator | skipping: [testbed-node-3]
2025-06-02 20:09:05.662643 | orchestrator | skipping: [testbed-node-4]
2025-06-02 20:09:05.662650 | orchestrator | skipping: [testbed-node-5]
2025-06-02 20:09:05.662662 | orchestrator |
2025-06-02 20:09:05.662669 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2025-06-02 20:09:05.662676 | orchestrator | Monday 02 June 2025 20:07:16 +0000 (0:00:00.313) 0:00:22.353 ***********
2025-06-02 20:09:05.662683 | orchestrator | ok: [testbed-node-3]
2025-06-02 20:09:05.662690 | orchestrator | ok: [testbed-node-4]
2025-06-02 20:09:05.662697 | orchestrator | ok: [testbed-node-5]
2025-06-02 20:09:05.662704 | orchestrator |
2025-06-02 20:09:05.662711 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2025-06-02 20:09:05.662718 | orchestrator | Monday 02 June 2025 20:07:17 +0000 (0:00:00.637) 0:00:22.990 ***********
2025-06-02 20:09:05.662725 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-06-02 20:09:05.662755 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-06-02 20:09:05.662761 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-06-02 20:09:05.662765 | orchestrator | skipping: [testbed-node-3]
2025-06-02 20:09:05.662769 | orchestrator |
2025-06-02 20:09:05.662773 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2025-06-02 20:09:05.662777 | orchestrator | Monday 02 June 2025 20:07:17 +0000 (0:00:00.378) 0:00:23.369 ***********
2025-06-02 20:09:05.662780 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-06-02 20:09:05.662784 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-06-02 20:09:05.662788 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-06-02 20:09:05.662792 | orchestrator | skipping: [testbed-node-3]
2025-06-02 20:09:05.662795 | orchestrator |
2025-06-02 20:09:05.662799 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2025-06-02 20:09:05.662803 | orchestrator | Monday 02 June 2025 20:07:17 +0000 (0:00:00.379) 0:00:23.748 ***********
2025-06-02 20:09:05.662807 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-06-02 20:09:05.662810 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-06-02 20:09:05.662814 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-06-02 20:09:05.662818 | orchestrator | skipping: [testbed-node-3]
2025-06-02 20:09:05.662822 | orchestrator |
2025-06-02 20:09:05.662825 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2025-06-02 20:09:05.662829 | orchestrator | Monday 02 June 2025 20:07:18 +0000 (0:00:00.382) 0:00:24.130 ***********
2025-06-02 20:09:05.662833 | orchestrator | ok: [testbed-node-3]
2025-06-02 20:09:05.662837 | orchestrator | ok: [testbed-node-4]
2025-06-02 20:09:05.662843 | orchestrator | ok: [testbed-node-5]
2025-06-02 20:09:05.662849 | orchestrator |
2025-06-02 20:09:05.662855 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2025-06-02 20:09:05.662860 | orchestrator | Monday 02 June 2025 20:07:18 +0000 (0:00:00.336) 0:00:24.466 ***********
2025-06-02 20:09:05.662866 | orchestrator | ok: [testbed-node-3] => (item=0)
2025-06-02 20:09:05.662873 | orchestrator | ok: [testbed-node-4] => (item=0)
2025-06-02 20:09:05.662879 | orchestrator | ok: [testbed-node-5] => (item=0)
2025-06-02 20:09:05.662885 | orchestrator |
2025-06-02 20:09:05.662892 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] **************************************
2025-06-02 20:09:05.662898 | orchestrator | Monday 02 June 2025 20:07:19 +0000 (0:00:00.496) 0:00:24.963 ***********
2025-06-02 20:09:05.662904 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2025-06-02 20:09:05.662911 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-06-02 20:09:05.662917 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-06-02 20:09:05.662923 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3)
2025-06-02 20:09:05.662929 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2025-06-02 20:09:05.662935 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2025-06-02 20:09:05.662947 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2025-06-02 20:09:05.662953 | orchestrator |
2025-06-02 20:09:05.662960 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ********************************
2025-06-02 20:09:05.662966 | orchestrator | Monday 02 June 2025 20:07:20 +0000 (0:00:00.977) 0:00:25.940 ***********
2025-06-02 20:09:05.662972 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2025-06-02 20:09:05.662979 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-06-02 20:09:05.662985 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-06-02 20:09:05.662990 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3)
2025-06-02 20:09:05.662997 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2025-06-02 20:09:05.663003 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2025-06-02 20:09:05.663013 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2025-06-02 20:09:05.663019 | orchestrator |
2025-06-02 20:09:05.663025 | orchestrator | TASK [Include tasks from the ceph-osd role] ************************************
2025-06-02 20:09:05.663038 | orchestrator | Monday 02 June 2025 20:07:22 +0000 (0:00:02.016) 0:00:27.957 ***********
2025-06-02 20:09:05.663045 | orchestrator | skipping: [testbed-node-3]
2025-06-02 20:09:05.663051 | orchestrator | skipping: [testbed-node-4]
2025-06-02 20:09:05.663057 | orchestrator | included: /ansible/tasks/openstack_config.yml for testbed-node-5
2025-06-02 20:09:05.663064 | orchestrator |
2025-06-02 20:09:05.663070 | orchestrator | TASK [create openstack pool(s)] ************************************************
2025-06-02 20:09:05.663075 | orchestrator | Monday 02 June 2025 20:07:22 +0000 (0:00:00.365) 0:00:28.322 ***********
2025-06-02 20:09:05.663083 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'backups', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2025-06-02 20:09:05.663091 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'volumes', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2025-06-02 20:09:05.663097 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'images', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2025-06-02 20:09:05.663104 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'metrics', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2025-06-02 20:09:05.663110 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'vms', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2025-06-02 20:09:05.663116 | orchestrator |
2025-06-02 20:09:05.663123 | orchestrator | TASK [generate keys] ***********************************************************
2025-06-02 20:09:05.663129 | orchestrator | Monday 02 June 2025 20:08:08 +0000 (0:00:45.566) 0:01:13.889 ***********
2025-06-02 20:09:05.663135 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-06-02 20:09:05.663141 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-06-02 20:09:05.663148 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-06-02 20:09:05.663159 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-06-02 20:09:05.663166 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-06-02 20:09:05.663172 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-06-02 20:09:05.663178 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] }}]
2025-06-02 20:09:05.663184 | orchestrator |
2025-06-02 20:09:05.663190 | orchestrator | TASK [get keys from monitors] **************************************************
2025-06-02 20:09:05.663197 | orchestrator | Monday 02 June 2025 20:08:32 +0000 (0:00:24.848) 0:01:38.738 ***********
2025-06-02 20:09:05.663203 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-06-02 20:09:05.663209 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-06-02 20:09:05.663216 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-06-02 20:09:05.663222 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-06-02 20:09:05.663228 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-06-02 20:09:05.663234 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-06-02 20:09:05.663241 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}]
2025-06-02 20:09:05.663247 | orchestrator |
2025-06-02 20:09:05.663252 | orchestrator | TASK [copy ceph key(s) if needed] **********************************************
2025-06-02 20:09:05.663258 | orchestrator | Monday 02 June 2025 20:08:45 +0000 (0:00:12.767) 0:01:51.505 ***********
2025-06-02 20:09:05.663264 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-06-02 20:09:05.663270 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2025-06-02 20:09:05.663277 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2025-06-02 20:09:05.663283 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-06-02 20:09:05.663294 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2025-06-02 20:09:05.663300 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2025-06-02 20:09:05.663306 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-06-02 20:09:05.663315 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2025-06-02 20:09:05.663322 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2025-06-02 20:09:05.663328 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-06-02 20:09:05.663334 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2025-06-02 20:09:05.663341 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2025-06-02 20:09:05.663347 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-06-02 20:09:05.663354 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2025-06-02 20:09:05.663360 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2025-06-02 20:09:05.663366 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-06-02 20:09:05.663372 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2025-06-02 20:09:05.663378 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2025-06-02 20:09:05.663385 | orchestrator | changed: [testbed-node-5 -> {{ item.1 }}]
2025-06-02 20:09:05.663391 | orchestrator |
2025-06-02 20:09:05.663397 | orchestrator | PLAY RECAP *********************************************************************
2025-06-02 20:09:05.663404 | orchestrator | testbed-node-3 : ok=25  changed=0 unreachable=0 failed=0 skipped=28  rescued=0 ignored=0
2025-06-02 20:09:05.663416 | orchestrator | testbed-node-4 : ok=18  changed=0 unreachable=0 failed=0 skipped=21  rescued=0 ignored=0
2025-06-02 20:09:05.663423 | orchestrator | testbed-node-5 : ok=23  changed=3  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0
2025-06-02 20:09:05.663428 | orchestrator |
2025-06-02 20:09:05.663434 | orchestrator |
2025-06-02 20:09:05.663440 | orchestrator |
2025-06-02 20:09:05.663447 | orchestrator | TASKS RECAP ********************************************************************
2025-06-02 20:09:05.663453 | orchestrator | Monday 02 June 2025 20:09:03 +0000 (0:00:17.646) 0:02:09.152 ***********
2025-06-02 20:09:05.663459 | orchestrator | ===============================================================================
2025-06-02 20:09:05.663465 | orchestrator | create openstack pool(s) ----------------------------------------------- 45.57s
2025-06-02 20:09:05.663471 | orchestrator | generate keys ---------------------------------------------------------- 24.85s
2025-06-02 20:09:05.663477 | orchestrator | copy ceph key(s) if needed --------------------------------------------- 17.65s
2025-06-02 20:09:05.663483 | orchestrator | get keys from monitors ------------------------------------------------- 12.77s 2025-06-02 20:09:05.663488 | orchestrator | ceph-facts : Find a running mon container ------------------------------- 2.19s 2025-06-02 20:09:05.663494 | orchestrator | ceph-facts : Set_fact ceph_admin_command -------------------------------- 2.02s 2025-06-02 20:09:05.663501 | orchestrator | ceph-facts : Get current fsid if cluster is already running ------------- 1.64s 2025-06-02 20:09:05.663507 | orchestrator | ceph-facts : Set_fact ceph_run_cmd -------------------------------------- 0.98s 2025-06-02 20:09:05.663513 | orchestrator | ceph-facts : Set_fact _monitor_addresses - ipv4 ------------------------- 0.84s 2025-06-02 20:09:05.663520 | orchestrator | ceph-facts : Check if podman binary is present -------------------------- 0.81s 2025-06-02 20:09:05.663526 | orchestrator | ceph-facts : Check if the ceph mon socket is in-use --------------------- 0.80s 2025-06-02 20:09:05.663533 | orchestrator | ceph-facts : Check if the ceph conf exists ------------------------------ 0.71s 2025-06-02 20:09:05.663539 | orchestrator | ceph-facts : Import_tasks set_radosgw_address.yml ----------------------- 0.68s 2025-06-02 20:09:05.663545 | orchestrator | ceph-facts : Read osd pool default crush rule --------------------------- 0.66s 2025-06-02 20:09:05.663551 | orchestrator | ceph-facts : Set_fact monitor_name ansible_facts['hostname'] ------------ 0.64s 2025-06-02 20:09:05.663557 | orchestrator | ceph-facts : Set_fact _radosgw_address to radosgw_address --------------- 0.64s 2025-06-02 20:09:05.663563 | orchestrator | ceph-facts : Check if it is atomic host --------------------------------- 0.62s 2025-06-02 20:09:05.663569 | orchestrator | ceph-facts : Collect existed devices ------------------------------------ 0.61s 2025-06-02 20:09:05.663576 | orchestrator | ceph-facts : Include facts.yml ------------------------------------------ 0.60s 2025-06-02 
20:09:05.663582 | orchestrator | ceph-facts : Set_fact devices generate device list when osd_auto_discovery --- 0.58s 2025-06-02 20:09:05.663589 | orchestrator | 2025-06-02 20:09:05 | INFO  | Wait 1 second(s) until the next check 2025-06-02 20:09:08.703897 | orchestrator | 2025-06-02 20:09:08 | INFO  | Task e9c0ab87-da7c-4705-8f7a-95c4701d9f42 is in state STARTED 2025-06-02 20:09:08.705899 | orchestrator | 2025-06-02 20:09:08 | INFO  | Task b9bd58c0-51ef-4a4d-8566-c45d47a9a927 is in state STARTED 2025-06-02 20:09:08.708430 | orchestrator | 2025-06-02 20:09:08 | INFO  | Task 3dc7e772-9928-4352-8c43-c2de036f278e is in state STARTED 2025-06-02 20:09:08.708495 | orchestrator | 2025-06-02 20:09:08 | INFO  | Wait 1 second(s) until the next check 2025-06-02 20:09:11.758571 | orchestrator | 2025-06-02 20:09:11 | INFO  | Task e9c0ab87-da7c-4705-8f7a-95c4701d9f42 is in state STARTED 2025-06-02 20:09:11.761249 | orchestrator | 2025-06-02 20:09:11 | INFO  | Task b9bd58c0-51ef-4a4d-8566-c45d47a9a927 is in state STARTED 2025-06-02 20:09:11.763833 | orchestrator | 2025-06-02 20:09:11 | INFO  | Task 3dc7e772-9928-4352-8c43-c2de036f278e is in state STARTED 2025-06-02 20:09:11.763968 | orchestrator | 2025-06-02 20:09:11 | INFO  | Wait 1 second(s) until the next check 2025-06-02 20:09:14.812289 | orchestrator | 2025-06-02 20:09:14 | INFO  | Task e9c0ab87-da7c-4705-8f7a-95c4701d9f42 is in state STARTED 2025-06-02 20:09:14.813139 | orchestrator | 2025-06-02 20:09:14 | INFO  | Task b9bd58c0-51ef-4a4d-8566-c45d47a9a927 is in state STARTED 2025-06-02 20:09:14.814175 | orchestrator | 2025-06-02 20:09:14 | INFO  | Task 3dc7e772-9928-4352-8c43-c2de036f278e is in state STARTED 2025-06-02 20:09:14.814202 | orchestrator | 2025-06-02 20:09:14 | INFO  | Wait 1 second(s) until the next check 2025-06-02 20:09:17.864071 | orchestrator | 2025-06-02 20:09:17 | INFO  | Task e9c0ab87-da7c-4705-8f7a-95c4701d9f42 is in state STARTED 2025-06-02 20:09:17.865463 | orchestrator | 2025-06-02 20:09:17 | INFO  | 
Task b9bd58c0-51ef-4a4d-8566-c45d47a9a927 is in state STARTED 2025-06-02 20:09:17.867499 | orchestrator | 2025-06-02 20:09:17 | INFO  | Task 3dc7e772-9928-4352-8c43-c2de036f278e is in state STARTED 2025-06-02 20:09:17.867549 | orchestrator | 2025-06-02 20:09:17 | INFO  | Wait 1 second(s) until the next check 2025-06-02 20:09:20.918288 | orchestrator | 2025-06-02 20:09:20 | INFO  | Task e9c0ab87-da7c-4705-8f7a-95c4701d9f42 is in state STARTED 2025-06-02 20:09:20.919916 | orchestrator | 2025-06-02 20:09:20 | INFO  | Task b9bd58c0-51ef-4a4d-8566-c45d47a9a927 is in state STARTED 2025-06-02 20:09:20.920975 | orchestrator | 2025-06-02 20:09:20 | INFO  | Task 3dc7e772-9928-4352-8c43-c2de036f278e is in state STARTED 2025-06-02 20:09:20.921013 | orchestrator | 2025-06-02 20:09:20 | INFO  | Wait 1 second(s) until the next check 2025-06-02 20:09:23.958588 | orchestrator | 2025-06-02 20:09:23 | INFO  | Task e9c0ab87-da7c-4705-8f7a-95c4701d9f42 is in state STARTED 2025-06-02 20:09:23.960921 | orchestrator | 2025-06-02 20:09:23 | INFO  | Task b9bd58c0-51ef-4a4d-8566-c45d47a9a927 is in state STARTED 2025-06-02 20:09:23.963600 | orchestrator | 2025-06-02 20:09:23 | INFO  | Task 3dc7e772-9928-4352-8c43-c2de036f278e is in state STARTED 2025-06-02 20:09:23.963649 | orchestrator | 2025-06-02 20:09:23 | INFO  | Wait 1 second(s) until the next check 2025-06-02 20:09:27.025791 | orchestrator | 2025-06-02 20:09:27 | INFO  | Task e9c0ab87-da7c-4705-8f7a-95c4701d9f42 is in state STARTED 2025-06-02 20:09:27.028647 | orchestrator | 2025-06-02 20:09:27 | INFO  | Task b9bd58c0-51ef-4a4d-8566-c45d47a9a927 is in state STARTED 2025-06-02 20:09:27.031171 | orchestrator | 2025-06-02 20:09:27 | INFO  | Task 3dc7e772-9928-4352-8c43-c2de036f278e is in state STARTED 2025-06-02 20:09:27.031199 | orchestrator | 2025-06-02 20:09:27 | INFO  | Wait 1 second(s) until the next check 2025-06-02 20:09:30.072356 | orchestrator | 2025-06-02 20:09:30 | INFO  | Task e9c0ab87-da7c-4705-8f7a-95c4701d9f42 is in state 
STARTED 2025-06-02 20:09:30.076116 | orchestrator | 2025-06-02 20:09:30 | INFO  | Task b9bd58c0-51ef-4a4d-8566-c45d47a9a927 is in state SUCCESS 2025-06-02 20:09:30.077361 | orchestrator | 2025-06-02 20:09:30.077403 | orchestrator | 2025-06-02 20:09:30.077416 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-06-02 20:09:30.077430 | orchestrator | 2025-06-02 20:09:30.077444 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-06-02 20:09:30.077456 | orchestrator | Monday 02 June 2025 20:07:51 +0000 (0:00:00.258) 0:00:00.258 *********** 2025-06-02 20:09:30.077470 | orchestrator | ok: [testbed-node-0] 2025-06-02 20:09:30.077484 | orchestrator | ok: [testbed-node-1] 2025-06-02 20:09:30.077497 | orchestrator | ok: [testbed-node-2] 2025-06-02 20:09:30.077509 | orchestrator | 2025-06-02 20:09:30.077521 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-06-02 20:09:30.077560 | orchestrator | Monday 02 June 2025 20:07:52 +0000 (0:00:00.300) 0:00:00.558 *********** 2025-06-02 20:09:30.077572 | orchestrator | ok: [testbed-node-0] => (item=enable_horizon_True) 2025-06-02 20:09:30.077585 | orchestrator | ok: [testbed-node-1] => (item=enable_horizon_True) 2025-06-02 20:09:30.077597 | orchestrator | ok: [testbed-node-2] => (item=enable_horizon_True) 2025-06-02 20:09:30.077608 | orchestrator | 2025-06-02 20:09:30.077619 | orchestrator | PLAY [Apply role horizon] ****************************************************** 2025-06-02 20:09:30.077630 | orchestrator | 2025-06-02 20:09:30.077642 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-06-02 20:09:30.077654 | orchestrator | Monday 02 June 2025 20:07:52 +0000 (0:00:00.387) 0:00:00.946 *********** 2025-06-02 20:09:30.077661 | orchestrator | included: /ansible/roles/horizon/tasks/deploy.yml for testbed-node-0, testbed-node-1, 
testbed-node-2 2025-06-02 20:09:30.077669 | orchestrator | 2025-06-02 20:09:30.077676 | orchestrator | TASK [horizon : Ensuring config directories exist] ***************************** 2025-06-02 20:09:30.077683 | orchestrator | Monday 02 June 2025 20:07:53 +0000 (0:00:00.477) 0:00:01.424 *********** 2025-06-02 20:09:30.077728 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250530', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': 
['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-06-02 20:09:30.077769 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250530', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg 
^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-06-02 20:09:30.077866 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250530', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 
'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-06-02 20:09:30.077881 | orchestrator | 2025-06-02 20:09:30.077892 | orchestrator | TASK [horizon : Set empty custom policy] *************************************** 2025-06-02 20:09:30.077904 | orchestrator | Monday 02 June 2025 20:07:54 +0000 (0:00:01.113) 0:00:02.538 *********** 2025-06-02 20:09:30.077915 | orchestrator | ok: [testbed-node-0] 2025-06-02 20:09:30.078004 | orchestrator | ok: [testbed-node-1] 2025-06-02 20:09:30.078324 | orchestrator | ok: [testbed-node-2] 2025-06-02 20:09:30.078340 | orchestrator | 2025-06-02 20:09:30.078349 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-06-02 20:09:30.078356 | orchestrator | Monday 02 June 2025 20:07:54 +0000 (0:00:00.411) 0:00:02.949 *********** 2025-06-02 20:09:30.078372 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'cloudkitty', 'enabled': False})  2025-06-02 20:09:30.078379 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'heat', 'enabled': 'no'})  2025-06-02 20:09:30.078385 | orchestrator | skipping: 
[testbed-node-0] => (item={'name': 'ironic', 'enabled': False})  2025-06-02 20:09:30.078392 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'masakari', 'enabled': False})  2025-06-02 20:09:30.078398 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'mistral', 'enabled': False})  2025-06-02 20:09:30.078404 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'tacker', 'enabled': False})  2025-06-02 20:09:30.078410 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'trove', 'enabled': False})  2025-06-02 20:09:30.078416 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'watcher', 'enabled': False})  2025-06-02 20:09:30.078422 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'cloudkitty', 'enabled': False})  2025-06-02 20:09:30.078428 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'heat', 'enabled': 'no'})  2025-06-02 20:09:30.078434 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'ironic', 'enabled': False})  2025-06-02 20:09:30.078530 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'masakari', 'enabled': False})  2025-06-02 20:09:30.078540 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'mistral', 'enabled': False})  2025-06-02 20:09:30.078550 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'tacker', 'enabled': False})  2025-06-02 20:09:30.078561 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'trove', 'enabled': False})  2025-06-02 20:09:30.078571 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'watcher', 'enabled': False})  2025-06-02 20:09:30.078589 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'cloudkitty', 'enabled': False})  2025-06-02 20:09:30.078599 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'heat', 'enabled': 'no'})  2025-06-02 20:09:30.078610 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'ironic', 'enabled': False})  2025-06-02 
20:09:30.078683 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'masakari', 'enabled': False})  2025-06-02 20:09:30.078697 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'mistral', 'enabled': False})  2025-06-02 20:09:30.078785 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'tacker', 'enabled': False})  2025-06-02 20:09:30.078798 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'trove', 'enabled': False})  2025-06-02 20:09:30.078808 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'watcher', 'enabled': False})  2025-06-02 20:09:30.078818 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'ceilometer', 'enabled': 'yes'}) 2025-06-02 20:09:30.078830 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'cinder', 'enabled': 'yes'}) 2025-06-02 20:09:30.078840 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'designate', 'enabled': True}) 2025-06-02 20:09:30.078850 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'glance', 'enabled': True}) 2025-06-02 20:09:30.078858 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'keystone', 'enabled': True}) 2025-06-02 20:09:30.078868 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'magnum', 'enabled': True}) 2025-06-02 20:09:30.078888 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'manila', 'enabled': True}) 2025-06-02 20:09:30.078913 | 
orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'neutron', 'enabled': True}) 2025-06-02 20:09:30.078922 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'nova', 'enabled': True}) 2025-06-02 20:09:30.078934 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'octavia', 'enabled': True}) 2025-06-02 20:09:30.078955 | orchestrator | 2025-06-02 20:09:30.078966 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-06-02 20:09:30.078976 | orchestrator | Monday 02 June 2025 20:07:55 +0000 (0:00:00.718) 0:00:03.667 *********** 2025-06-02 20:09:30.078985 | orchestrator | ok: [testbed-node-0] 2025-06-02 20:09:30.078996 | orchestrator | ok: [testbed-node-1] 2025-06-02 20:09:30.079007 | orchestrator | ok: [testbed-node-2] 2025-06-02 20:09:30.079017 | orchestrator | 2025-06-02 20:09:30.079027 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-06-02 20:09:30.079037 | orchestrator | Monday 02 June 2025 20:07:55 +0000 (0:00:00.297) 0:00:03.964 *********** 2025-06-02 20:09:30.079058 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:09:30.079066 | orchestrator | 2025-06-02 20:09:30.079072 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-06-02 20:09:30.079078 | orchestrator | Monday 02 June 2025 20:07:55 +0000 (0:00:00.105) 0:00:04.070 *********** 2025-06-02 20:09:30.079085 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:09:30.079091 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:09:30.079097 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:09:30.079103 | orchestrator | 2025-06-02 20:09:30.079109 | orchestrator | TASK [horizon : 
Update policy file name] *************************************** 2025-06-02 20:09:30.079115 | orchestrator | Monday 02 June 2025 20:07:56 +0000 (0:00:00.438) 0:00:04.508 *********** 2025-06-02 20:09:30.079122 | orchestrator | ok: [testbed-node-0] 2025-06-02 20:09:30.079128 | orchestrator | ok: [testbed-node-1] 2025-06-02 20:09:30.079134 | orchestrator | ok: [testbed-node-2] 2025-06-02 20:09:30.079140 | orchestrator | 2025-06-02 20:09:30.079146 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-06-02 20:09:30.079152 | orchestrator | Monday 02 June 2025 20:07:56 +0000 (0:00:00.285) 0:00:04.794 *********** 2025-06-02 20:09:30.079158 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:09:30.079164 | orchestrator | 2025-06-02 20:09:30.079170 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-06-02 20:09:30.079176 | orchestrator | Monday 02 June 2025 20:07:56 +0000 (0:00:00.132) 0:00:04.926 *********** 2025-06-02 20:09:30.079182 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:09:30.079188 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:09:30.079195 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:09:30.079201 | orchestrator | 2025-06-02 20:09:30.079207 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-06-02 20:09:30.079213 | orchestrator | Monday 02 June 2025 20:07:56 +0000 (0:00:00.275) 0:00:05.202 *********** 2025-06-02 20:09:30.079219 | orchestrator | ok: [testbed-node-0] 2025-06-02 20:09:30.079225 | orchestrator | ok: [testbed-node-1] 2025-06-02 20:09:30.079231 | orchestrator | ok: [testbed-node-2] 2025-06-02 20:09:30.079237 | orchestrator | 2025-06-02 20:09:30.079253 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-06-02 20:09:30.079266 | orchestrator | Monday 02 June 2025 20:07:57 +0000 (0:00:00.282) 0:00:05.485 *********** 
2025-06-02 20:09:30.079282 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:09:30.079292 | orchestrator | 2025-06-02 20:09:30.079311 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-06-02 20:09:30.079321 | orchestrator | Monday 02 June 2025 20:07:57 +0000 (0:00:00.312) 0:00:05.797 *********** 2025-06-02 20:09:30.079331 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:09:30.079341 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:09:30.079352 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:09:30.079361 | orchestrator | 2025-06-02 20:09:30.079372 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-06-02 20:09:30.079383 | orchestrator | Monday 02 June 2025 20:07:57 +0000 (0:00:00.314) 0:00:06.112 *********** 2025-06-02 20:09:30.079390 | orchestrator | ok: [testbed-node-0] 2025-06-02 20:09:30.079398 | orchestrator | ok: [testbed-node-1] 2025-06-02 20:09:30.079405 | orchestrator | ok: [testbed-node-2] 2025-06-02 20:09:30.079413 | orchestrator | 2025-06-02 20:09:30.079420 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-06-02 20:09:30.079427 | orchestrator | Monday 02 June 2025 20:07:57 +0000 (0:00:00.294) 0:00:06.406 *********** 2025-06-02 20:09:30.079434 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:09:30.079441 | orchestrator | 2025-06-02 20:09:30.079449 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-06-02 20:09:30.079456 | orchestrator | Monday 02 June 2025 20:07:58 +0000 (0:00:00.124) 0:00:06.530 *********** 2025-06-02 20:09:30.079463 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:09:30.079470 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:09:30.079477 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:09:30.079484 | orchestrator | 2025-06-02 20:09:30.079491 | orchestrator | TASK 
[horizon : Update policy file name] ***************************************
2025-06-02 20:09:30.079499 | orchestrator | Monday 02 June 2025 20:07:58 +0000 (0:00:00.296) 0:00:06.826 ***********
2025-06-02 20:09:30.079506 | orchestrator | ok: [testbed-node-0]
2025-06-02 20:09:30.079514 | orchestrator | ok: [testbed-node-1]
2025-06-02 20:09:30.079521 | orchestrator | ok: [testbed-node-2]
2025-06-02 20:09:30.079529 | orchestrator |
2025-06-02 20:09:30.079536 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2025-06-02 20:09:30.079543 | orchestrator | Monday 02 June 2025 20:07:58 +0000 (0:00:00.504) 0:00:07.331 ***********
2025-06-02 20:09:30.079551 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:09:30.079558 | orchestrator |
2025-06-02 20:09:30.079565 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2025-06-02 20:09:30.079572 | orchestrator | Monday 02 June 2025 20:07:59 +0000 (0:00:00.148) 0:00:07.479 ***********
2025-06-02 20:09:30.079580 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:09:30.079587 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:09:30.079595 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:09:30.079602 | orchestrator |
2025-06-02 20:09:30.079609 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2025-06-02 20:09:30.079615 | orchestrator | Monday 02 June 2025 20:07:59 +0000 (0:00:00.298) 0:00:07.778 ***********
2025-06-02 20:09:30.079621 | orchestrator | ok: [testbed-node-0]
2025-06-02 20:09:30.079627 | orchestrator | ok: [testbed-node-1]
2025-06-02 20:09:30.079634 | orchestrator | ok: [testbed-node-2]
2025-06-02 20:09:30.079640 | orchestrator |
2025-06-02 20:09:30.079646 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2025-06-02 20:09:30.079653 | orchestrator | Monday 02 June 2025 20:07:59 +0000 (0:00:00.318) 0:00:08.096 ***********
2025-06-02 20:09:30.079663 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:09:30.079672 | orchestrator |
2025-06-02 20:09:30.079688 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2025-06-02 20:09:30.079701 | orchestrator | Monday 02 June 2025 20:07:59 +0000 (0:00:00.123) 0:00:08.220 ***********
2025-06-02 20:09:30.079728 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:09:30.079737 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:09:30.079746 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:09:30.079755 | orchestrator |
2025-06-02 20:09:30.079764 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2025-06-02 20:09:30.079803 | orchestrator | Monday 02 June 2025 20:08:00 +0000 (0:00:00.459) 0:00:08.680 ***********
2025-06-02 20:09:30.079814 | orchestrator | ok: [testbed-node-0]
2025-06-02 20:09:30.079826 | orchestrator | ok: [testbed-node-1]
2025-06-02 20:09:30.079835 | orchestrator | ok: [testbed-node-2]
2025-06-02 20:09:30.079845 | orchestrator |
2025-06-02 20:09:30.079855 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2025-06-02 20:09:30.079875 | orchestrator | Monday 02 June 2025 20:08:00 +0000 (0:00:00.307) 0:00:08.987 ***********
2025-06-02 20:09:30.079886 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:09:30.079896 | orchestrator |
2025-06-02 20:09:30.079906 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2025-06-02 20:09:30.079916 | orchestrator | Monday 02 June 2025 20:08:00 +0000 (0:00:00.120) 0:00:09.108 ***********
2025-06-02 20:09:30.079928 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:09:30.079938 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:09:30.079948 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:09:30.079957 | orchestrator |
2025-06-02 20:09:30.079963 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2025-06-02 20:09:30.079969 | orchestrator | Monday 02 June 2025 20:08:00 +0000 (0:00:00.273) 0:00:09.382 ***********
2025-06-02 20:09:30.079975 | orchestrator | ok: [testbed-node-0]
2025-06-02 20:09:30.079982 | orchestrator | ok: [testbed-node-1]
2025-06-02 20:09:30.079988 | orchestrator | ok: [testbed-node-2]
2025-06-02 20:09:30.079994 | orchestrator |
2025-06-02 20:09:30.080000 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2025-06-02 20:09:30.080006 | orchestrator | Monday 02 June 2025 20:08:01 +0000 (0:00:00.287) 0:00:09.669 ***********
2025-06-02 20:09:30.080012 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:09:30.080018 | orchestrator |
2025-06-02 20:09:30.080024 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2025-06-02 20:09:30.080030 | orchestrator | Monday 02 June 2025 20:08:01 +0000 (0:00:00.118) 0:00:09.788 ***********
2025-06-02 20:09:30.080036 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:09:30.080042 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:09:30.080054 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:09:30.080061 | orchestrator |
2025-06-02 20:09:30.080067 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2025-06-02 20:09:30.080073 | orchestrator | Monday 02 June 2025 20:08:01 +0000 (0:00:00.482) 0:00:10.270 ***********
2025-06-02 20:09:30.080079 | orchestrator | ok: [testbed-node-0]
2025-06-02 20:09:30.080085 | orchestrator | ok: [testbed-node-1]
2025-06-02 20:09:30.080091 | orchestrator | ok: [testbed-node-2]
2025-06-02 20:09:30.080097 | orchestrator |
2025-06-02 20:09:30.080103 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2025-06-02 20:09:30.080109 | orchestrator | Monday 02 June 2025 20:08:02 +0000 (0:00:00.297) 0:00:10.568 ***********
2025-06-02 20:09:30.080115 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:09:30.080121 | orchestrator |
2025-06-02 20:09:30.080127 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2025-06-02 20:09:30.080133 | orchestrator | Monday 02 June 2025 20:08:02 +0000 (0:00:00.127) 0:00:10.695 ***********
2025-06-02 20:09:30.080139 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:09:30.080145 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:09:30.080151 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:09:30.080157 | orchestrator |
2025-06-02 20:09:30.080163 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2025-06-02 20:09:30.080169 | orchestrator | Monday 02 June 2025 20:08:02 +0000 (0:00:00.272) 0:00:10.967 ***********
2025-06-02 20:09:30.080175 | orchestrator | ok: [testbed-node-0]
2025-06-02 20:09:30.080181 | orchestrator | ok: [testbed-node-1]
2025-06-02 20:09:30.080187 | orchestrator | ok: [testbed-node-2]
2025-06-02 20:09:30.080193 | orchestrator |
2025-06-02 20:09:30.080199 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2025-06-02 20:09:30.080211 | orchestrator | Monday 02 June 2025 20:08:03 +0000 (0:00:00.481) 0:00:11.449 ***********
2025-06-02 20:09:30.080217 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:09:30.080224 | orchestrator |
2025-06-02 20:09:30.080230 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2025-06-02 20:09:30.080236 | orchestrator | Monday 02 June 2025 20:08:03 +0000 (0:00:00.125) 0:00:11.575 ***********
2025-06-02 20:09:30.080242 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:09:30.080248 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:09:30.080254 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:09:30.080260 | orchestrator |
2025-06-02 20:09:30.080267 | orchestrator | TASK [horizon : Copying over config.json files for services] *******************
2025-06-02 20:09:30.080273 | orchestrator | Monday 02 June 2025 20:08:03 +0000 (0:00:00.289) 0:00:11.864 ***********
2025-06-02 20:09:30.080279 | orchestrator | changed: [testbed-node-2]
2025-06-02 20:09:30.080285 | orchestrator | changed: [testbed-node-0]
2025-06-02 20:09:30.080291 | orchestrator | changed: [testbed-node-1]
2025-06-02 20:09:30.080297 | orchestrator |
2025-06-02 20:09:30.080303 | orchestrator | TASK [horizon : Copying over horizon.conf] *************************************
2025-06-02 20:09:30.080309 | orchestrator | Monday 02 June 2025 20:08:04 +0000 (0:00:01.540) 0:00:13.404 ***********
2025-06-02 20:09:30.080316 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/horizon.conf.j2)
2025-06-02 20:09:30.080322 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/horizon.conf.j2)
2025-06-02 20:09:30.080328 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/horizon.conf.j2)
2025-06-02 20:09:30.080334 | orchestrator |
2025-06-02 20:09:30.080340 | orchestrator | TASK [horizon : Copying over kolla-settings.py] ********************************
2025-06-02 20:09:30.080346 | orchestrator | Monday 02 June 2025 20:08:07 +0000 (0:00:02.014) 0:00:15.419 ***********
2025-06-02 20:09:30.080353 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2)
2025-06-02 20:09:30.080360 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2)
2025-06-02 20:09:30.080366 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2)
2025-06-02 20:09:30.080372 | orchestrator |
2025-06-02 20:09:30.080383 | orchestrator | TASK [horizon : Copying over custom-settings.py] *******************************
2025-06-02 20:09:30.080390 | orchestrator | Monday 02 June 2025 20:08:09 +0000 (0:00:02.199) 0:00:17.619 ***********
2025-06-02 20:09:30.080396 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2)
2025-06-02 20:09:30.080402 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2)
2025-06-02 20:09:30.080408 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2)
2025-06-02 20:09:30.080414 | orchestrator |
2025-06-02 20:09:30.080421 | orchestrator | TASK [horizon : Copying over existing policy file] *****************************
2025-06-02 20:09:30.080427 | orchestrator | Monday 02 June 2025 20:08:10 +0000 (0:00:01.590) 0:00:19.210 ***********
2025-06-02 20:09:30.080433 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:09:30.080439 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:09:30.080456 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:09:30.080462 | orchestrator |
2025-06-02 20:09:30.080476 | orchestrator | TASK [horizon : Copying over custom themes] ************************************
2025-06-02 20:09:30.080482 | orchestrator | Monday 02 June 2025 20:08:11 +0000 (0:00:00.319) 0:00:19.529 ***********
2025-06-02 20:09:30.080488 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:09:30.080495 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:09:30.080501 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:09:30.080507 | orchestrator |
2025-06-02 20:09:30.080513 | orchestrator | TASK [horizon : include_tasks] *************************************************
2025-06-02 20:09:30.080519 | orchestrator | Monday 02 June 2025 20:08:11 +0000 (0:00:00.280) 0:00:19.810 ***********
2025-06-02 20:09:30.080529 | orchestrator | included: /ansible/roles/horizon/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-02 20:09:30.080535 |
orchestrator | 2025-06-02 20:09:30.080545 | orchestrator | TASK [service-cert-copy : horizon | Copying over extra CA certificates] ******** 2025-06-02 20:09:30.080551 | orchestrator | Monday 02 June 2025 20:08:12 +0000 (0:00:00.747) 0:00:20.557 *********** 2025-06-02 20:09:30.080560 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250530', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg 
^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-06-02 20:09:30.080586 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250530', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 
'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-06-02 20:09:30.080599 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250530', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 
'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-06-02 20:09:30.080606 | orchestrator | 2025-06-02 20:09:30.080612 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS certificate] *** 2025-06-02 20:09:30.080619 | orchestrator | Monday 02 June 2025 20:08:13 +0000 (0:00:01.445) 0:00:22.003 *********** 2025-06-02 20:09:30.080634 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250530', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-06-02 20:09:30.080646 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:09:30.080658 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250530', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 
'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-06-02 20:09:30.080665 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:09:30.080676 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/horizon:25.1.1.20250530', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 
'custom_member_list': []}}}})  2025-06-02 20:09:30.080687 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:09:30.080693 | orchestrator | 2025-06-02 20:09:30.080700 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS key] ***** 2025-06-02 20:09:30.080706 | orchestrator | Monday 02 June 2025 20:08:14 +0000 (0:00:00.545) 0:00:22.549 *********** 2025-06-02 20:09:30.080735 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250530', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-06-02 20:09:30.080751 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:09:30.080761 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250530', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 
'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-06-02 20:09:30.080768 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:09:30.080780 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250530', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 
'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-06-02 20:09:30.080792 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:09:30.080799 | orchestrator | 2025-06-02 20:09:30.080805 | orchestrator | TASK [horizon : Deploy horizon container] ************************************** 2025-06-02 20:09:30.080811 | orchestrator | Monday 02 June 2025 20:08:14 +0000 (0:00:00.869) 0:00:23.418 *********** 2025-06-02 20:09:30.080821 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250530', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 
'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-06-02 20:09:30.080838 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250530', 
'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-06-02 20:09:30.080850 | 
orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250530', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 
'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2025-06-02 20:09:30.080857 | orchestrator |
2025-06-02 20:09:30.080863 | orchestrator | TASK [horizon : include_tasks] *************************************************
2025-06-02 20:09:30.080869 | orchestrator | Monday 02 June 2025 20:08:16 +0000 (0:00:01.116) 0:00:24.535 ***********
2025-06-02 20:09:30.080876 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:09:30.080882 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:09:30.080888 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:09:30.080894 | orchestrator |
2025-06-02 20:09:30.080900 | orchestrator | TASK [horizon : include_tasks] *************************************************
2025-06-02 20:09:30.080917 | orchestrator | Monday 02 June 2025 20:08:16 +0000 (0:00:00.249) 0:00:24.784 ***********
2025-06-02 20:09:30.080924 | orchestrator | included: /ansible/roles/horizon/tasks/bootstrap.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-02 20:09:30.080930 | orchestrator |
2025-06-02 20:09:30.080936 | orchestrator | TASK [horizon : Creating Horizon database] *************************************
2025-06-02 20:09:30.080943 | orchestrator | Monday 02 June 2025 20:08:16 +0000 (0:00:00.541) 0:00:25.325 ***********
2025-06-02 20:09:30.080949 | orchestrator | changed: [testbed-node-0]
2025-06-02 20:09:30.080955 | orchestrator |
2025-06-02 20:09:30.080961 | orchestrator | TASK [horizon : Creating Horizon database user and setting permissions] ********
2025-06-02 20:09:30.080967 | orchestrator | Monday 02 June 2025 20:08:19 +0000 (0:00:02.142) 0:00:27.467 ***********
2025-06-02 20:09:30.080973 | orchestrator | changed: [testbed-node-0]
2025-06-02 20:09:30.080979 | orchestrator |
2025-06-02 20:09:30.080985 | orchestrator | TASK [horizon : Running Horizon bootstrap container] ***************************
2025-06-02 20:09:30.080991 | orchestrator | Monday 02 June 2025 20:08:21 +0000 (0:00:02.069) 0:00:29.537 ***********
2025-06-02 20:09:30.080997 | orchestrator | changed: [testbed-node-0]
2025-06-02 20:09:30.081003 | orchestrator |
2025-06-02 20:09:30.081009 | orchestrator | TASK [horizon : Flush handlers] ************************************************
2025-06-02 20:09:30.081016 | orchestrator | Monday 02 June 2025 20:08:37 +0000 (0:00:16.383) 0:00:45.920 ***********
2025-06-02 20:09:30.081022 | orchestrator |
2025-06-02 20:09:30.081028 | orchestrator | TASK [horizon : Flush handlers] ************************************************
2025-06-02 20:09:30.081034 | orchestrator | Monday 02 June 2025 20:08:37 +0000 (0:00:00.065) 0:00:45.986 ***********
2025-06-02 20:09:30.081040 | orchestrator |
2025-06-02 20:09:30.081046 | orchestrator | TASK [horizon : Flush handlers] ************************************************
2025-06-02 20:09:30.081052 | orchestrator | Monday 02 June 2025 20:08:37 +0000 (0:00:00.067) 0:00:46.053 ***********
2025-06-02 20:09:30.081058 | orchestrator |
2025-06-02 20:09:30.081064 | orchestrator | RUNNING HANDLER [horizon : Restart horizon container] **************************
2025-06-02 20:09:30.081074 | orchestrator | Monday 02 June 2025 20:08:37 +0000 (0:00:00.069) 0:00:46.123 ***********
2025-06-02 20:09:30.081080 | orchestrator | changed: [testbed-node-0]
2025-06-02 20:09:30.081087 | orchestrator | changed: [testbed-node-1]
2025-06-02 20:09:30.081093 | orchestrator | changed: [testbed-node-2]
2025-06-02 20:09:30.081099 | orchestrator |
2025-06-02 20:09:30.081105 | orchestrator | PLAY RECAP *********************************************************************
2025-06-02 20:09:30.081111 | orchestrator | testbed-node-0 : ok=37  changed=11  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0
2025-06-02 20:09:30.081119 | orchestrator | testbed-node-1 : ok=34  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0
2025-06-02 20:09:30.081126 | orchestrator | testbed-node-2 : ok=34  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0
2025-06-02 20:09:30.081132 | orchestrator |
2025-06-02 20:09:30.081138 | orchestrator |
2025-06-02 20:09:30.081144 | orchestrator | TASKS RECAP ********************************************************************
2025-06-02 20:09:30.081150 | orchestrator | Monday 02 June 2025 20:09:29 +0000 (0:00:51.684) 0:01:37.807 ***********
2025-06-02 20:09:30.081157 | orchestrator | ===============================================================================
2025-06-02 20:09:30.081163 | orchestrator | horizon : Restart horizon container ------------------------------------ 51.68s
2025-06-02 20:09:30.081169 | orchestrator | horizon : Running Horizon bootstrap container -------------------------- 16.38s
2025-06-02 20:09:30.081175 | orchestrator | horizon : Copying over kolla-settings.py -------------------------------- 2.20s
2025-06-02 20:09:30.081181 | orchestrator | horizon : Creating Horizon database ------------------------------------- 2.14s
2025-06-02 20:09:30.081187 | orchestrator | horizon : Creating Horizon database user and setting permissions -------- 2.07s
2025-06-02 20:09:30.081193 | orchestrator | horizon : Copying over horizon.conf ------------------------------------- 2.01s
2025-06-02 20:09:30.081203 | orchestrator | horizon : Copying over custom-settings.py ------------------------------- 1.59s
2025-06-02 20:09:30.081210 | orchestrator | horizon : Copying over config.json files for services ------------------- 1.54s
2025-06-02 20:09:30.081216 | orchestrator | service-cert-copy : horizon | Copying over extra CA certificates -------- 1.45s
2025-06-02 20:09:30.081222 | orchestrator | horizon : Deploy horizon container -------------------------------------- 1.12s
2025-06-02 20:09:30.081228 | orchestrator | horizon : Ensuring config directories exist ----------------------------- 1.11s
2025-06-02 20:09:30.081234 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS key ----- 0.87s
2025-06-02 20:09:30.081240 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.75s
2025-06-02 20:09:30.081246 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.72s
2025-06-02 20:09:30.081252 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS certificate --- 0.55s
2025-06-02 20:09:30.081258 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.54s
2025-06-02 20:09:30.081264 | orchestrator | horizon : Update policy file name --------------------------------------- 0.50s
2025-06-02 20:09:30.081270 | orchestrator | horizon : Update custom policy file name -------------------------------- 0.48s
2025-06-02 20:09:30.081276 | orchestrator | horizon : Update policy file name --------------------------------------- 0.48s
2025-06-02 20:09:30.081282 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.48s
2025-06-02 20:09:30.081288 | orchestrator | 2025-06-02 20:09:30 | INFO  | Task 3dc7e772-9928-4352-8c43-c2de036f278e is in state STARTED
2025-06-02 20:09:30.081307 | orchestrator | 2025-06-02 20:09:30 | INFO  | Wait 1 second(s) until the next check
2025-06-02 20:09:33.117548 | orchestrator | 2025-06-02 20:09:33 | INFO  | Task e9c0ab87-da7c-4705-8f7a-95c4701d9f42 is in state SUCCESS
2025-06-02 20:09:33.118782 | orchestrator | 2025-06-02 20:09:33 | INFO  | Task 3dc7e772-9928-4352-8c43-c2de036f278e is in state STARTED
2025-06-02 20:09:33.118845 | orchestrator | 2025-06-02 20:09:33 | INFO  | Wait 1 second(s) until the next check
2025-06-02 20:09:36.172250 | orchestrator | 2025-06-02 20:09:36 | INFO  | Task 8d61b1a9-3c60-4593-87be-92c8e90b33c8 is in state STARTED
2025-06-02 20:09:36.172374 | orchestrator | 2025-06-02 20:09:36 | INFO  | Task 3dc7e772-9928-4352-8c43-c2de036f278e is in state STARTED
2025-06-02 20:10:09 | INFO  | Task 8d61b1a9-3c60-4593-87be-92c8e90b33c8 is in state STARTED
2025-06-02 20:10:09.657914 | orchestrator | 2025-06-02 20:10:09 | INFO  | Task 3dc7e772-9928-4352-8c43-c2de036f278e is in state STARTED
2025-06-02 20:10:09.657958 | orchestrator | 2025-06-02 20:10:09 | INFO  | Wait 1 second(s) until the next check
2025-06-02 20:10:12.701948 | orchestrator | 2025-06-02 20:10:12 | INFO  | Task 8d61b1a9-3c60-4593-87be-92c8e90b33c8 is in state STARTED
2025-06-02 20:10:12.703242 | orchestrator | 2025-06-02 20:10:12 | INFO  | Task 3dc7e772-9928-4352-8c43-c2de036f278e is in state STARTED
2025-06-02 20:10:12.703280 | orchestrator | 2025-06-02 20:10:12 | INFO  | Wait 1 second(s) until the next check
2025-06-02 20:10:15.747271 | orchestrator | 2025-06-02 20:10:15 | INFO  | Task 8d61b1a9-3c60-4593-87be-92c8e90b33c8 is in state STARTED
2025-06-02 20:10:15.749458 | orchestrator | 2025-06-02 20:10:15 | INFO  | Task 3dc7e772-9928-4352-8c43-c2de036f278e is in state STARTED
2025-06-02 20:10:15.749495 | orchestrator | 2025-06-02 20:10:15 | INFO  | Wait 1 second(s) until the next check
2025-06-02 20:10:18.802859 | orchestrator | 2025-06-02 20:10:18 | INFO  | Task 8d61b1a9-3c60-4593-87be-92c8e90b33c8 is in state STARTED
2025-06-02 20:10:18.804096 | orchestrator | 2025-06-02 20:10:18 | INFO  | Task 3dc7e772-9928-4352-8c43-c2de036f278e is in state STARTED
2025-06-02 20:10:18.804142 | orchestrator | 2025-06-02 20:10:18 | INFO  | Wait 1 second(s) until the next check
2025-06-02 20:10:21.856078 | orchestrator | 2025-06-02 20:10:21 | INFO  | Task 8d61b1a9-3c60-4593-87be-92c8e90b33c8 is in state STARTED
2025-06-02 20:10:21.860786 | orchestrator | 2025-06-02 20:10:21 | INFO  | Task 3dc7e772-9928-4352-8c43-c2de036f278e is in state SUCCESS
2025-06-02 20:10:21.862636 | orchestrator |
2025-06-02 20:10:21.862706 | orchestrator |
2025-06-02 20:10:21.862724 | orchestrator | PLAY [Copy ceph keys to the configuration repository] **************************
2025-06-02 20:10:21.862735 | orchestrator |
2025-06-02 20:10:21.862744 | orchestrator | TASK [Fetch all ceph keys] *****************************************************
2025-06-02 20:10:21.862756 | orchestrator | Monday 02 June 2025 20:09:07 +0000 (0:00:00.154) 0:00:00.154 ***********
2025-06-02 20:10:21.862765 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.admin.keyring)
2025-06-02 20:10:21.862775 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2025-06-02 20:10:21.862783 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2025-06-02 20:10:21.862796 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder-backup.keyring)
2025-06-02 20:10:21.862810 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2025-06-02 20:10:21.862819 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.nova.keyring)
2025-06-02 20:10:21.862827 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.glance.keyring)
2025-06-02 20:10:21.862837 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.gnocchi.keyring)
2025-06-02 20:10:21.862847 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.manila.keyring)
2025-06-02 20:10:21.862857 | orchestrator |
2025-06-02 20:10:21.862866 | orchestrator | TASK [Create share directory] **************************************************
2025-06-02 20:10:21.862876 | orchestrator | Monday 02 June 2025 20:09:12 +0000 (0:00:04.210) 0:00:04.365 ***********
2025-06-02 20:10:21.862886 | orchestrator | changed: [testbed-manager -> localhost]
2025-06-02 20:10:21.862897 | orchestrator |
2025-06-02 20:10:21.862906 | orchestrator | TASK [Write ceph keys to the share directory] **********************************
2025-06-02 20:10:21.862916 | orchestrator | Monday 02 June 2025 20:09:13 +0000 (0:00:00.944) 0:00:05.310 ***********
2025-06-02 20:10:21.862926 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.admin.keyring)
2025-06-02 20:10:21.862935 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring)
2025-06-02 20:10:21.862945 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring)
2025-06-02 20:10:21.862955 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder-backup.keyring)
2025-06-02 20:10:21.862964 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring)
2025-06-02 20:10:21.862973 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.nova.keyring)
2025-06-02 20:10:21.862983 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.glance.keyring)
2025-06-02 20:10:21.862993 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.gnocchi.keyring)
2025-06-02 20:10:21.863003 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.manila.keyring)
2025-06-02 20:10:21.863013 | orchestrator |
2025-06-02 20:10:21.863022 | orchestrator | TASK [Write ceph keys to the configuration directory] **************************
2025-06-02 20:10:21.863059 | orchestrator | Monday 02 June 2025 20:09:25 +0000 (0:00:12.594) 0:00:17.904 ***********
2025-06-02 20:10:21.863070 | orchestrator | changed: [testbed-manager] => (item=ceph.client.admin.keyring)
2025-06-02 20:10:21.863080 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring)
2025-06-02 20:10:21.863090 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring)
2025-06-02 20:10:21.863100 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder-backup.keyring)
2025-06-02 20:10:21.863110 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring)
2025-06-02 20:10:21.863120 | orchestrator | changed: [testbed-manager] => (item=ceph.client.nova.keyring)
2025-06-02 20:10:21.863129 | orchestrator | changed: [testbed-manager] => (item=ceph.client.glance.keyring)
2025-06-02 20:10:21.863138 | orchestrator | changed: [testbed-manager] => (item=ceph.client.gnocchi.keyring)
2025-06-02 20:10:21.863148 | orchestrator | changed: [testbed-manager] => (item=ceph.client.manila.keyring)
2025-06-02 20:10:21.863157 | orchestrator |
2025-06-02 20:10:21.863167 | orchestrator | PLAY RECAP *********************************************************************
2025-06-02 20:10:21.863177 | orchestrator | testbed-manager : ok=4  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-02 20:10:21.863188 | orchestrator |
2025-06-02 20:10:21.863197 | orchestrator |
2025-06-02 20:10:21.863205 | orchestrator | TASKS RECAP ********************************************************************
2025-06-02 20:10:21.863214 | orchestrator | Monday 02 June 2025 20:09:32 +0000 (0:00:06.420) 0:00:24.325 ***********
2025-06-02 20:10:21.863224 | orchestrator | ===============================================================================
2025-06-02 20:10:21.863234 | orchestrator | Write ceph keys to the share directory --------------------------------- 12.59s
2025-06-02 20:10:21.863244 | orchestrator | Write ceph keys to the configuration directory -------------------------- 6.42s
2025-06-02 20:10:21.863264 | orchestrator | Fetch all ceph keys ----------------------------------------------------- 4.21s
2025-06-02 20:10:21.863274 | orchestrator | Create share directory -------------------------------------------------- 0.94s
2025-06-02 20:10:21.863283 | orchestrator |
2025-06-02 20:10:21.863294 | orchestrator |
2025-06-02 20:10:21.863305 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-06-02 20:10:21.863316 | orchestrator |
2025-06-02 20:10:21.863338 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-06-02 20:10:21.863349 | orchestrator | Monday 02 June 2025 20:07:51 +0000 (0:00:00.246) 0:00:00.246 ***********
2025-06-02 20:10:21.863358 | orchestrator | ok: [testbed-node-0]
2025-06-02 20:10:21.863368 | orchestrator | ok: [testbed-node-1]
2025-06-02 20:10:21.863378 | orchestrator | ok: [testbed-node-2]
2025-06-02 20:10:21.863388 | orchestrator |
2025-06-02 20:10:21.863397 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-06-02 20:10:21.863407 | orchestrator | Monday 02 June 2025 20:07:52 +0000 (0:00:00.264) 0:00:00.511 ***********
2025-06-02 20:10:21.863416 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True)
2025-06-02 20:10:21.863427 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True)
2025-06-02 20:10:21.863436 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True)
2025-06-02 20:10:21.863445 | orchestrator |
2025-06-02 20:10:21.863455 | orchestrator | PLAY [Apply role keystone] *****************************************************
2025-06-02 20:10:21.863465 | orchestrator |
2025-06-02 20:10:21.863474 | orchestrator | TASK [keystone : include_tasks] ************************************************
2025-06-02 20:10:21.863484 | orchestrator | Monday 02 June 2025 20:07:52 +0000 (0:00:00.408) 0:00:00.919 ***********
2025-06-02 20:10:21.863493 | orchestrator | included: /ansible/roles/keystone/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-02 20:10:21.863503 | orchestrator |
2025-06-02 20:10:21.863513 | orchestrator | TASK [keystone : Ensuring config directories exist] ****************************
2025-06-02 20:10:21.863522 | orchestrator | Monday 02 June 2025 20:07:53 +0000 (0:00:00.535) 0:00:01.454 ***********
2025-06-02 20:10:21.863543 | orchestrator | changed:
[testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-06-02 20:10:21.863558 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 
'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-06-02 20:10:21.863582 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-06-02 20:10:21.863986 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-06-02 20:10:21.863999 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-06-02 20:10:21.864018 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-06-02 20:10:21.864028 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-06-02 20:10:21.864039 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-06-02 20:10:21.864049 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-06-02 20:10:21.864059 | orchestrator |
2025-06-02 20:10:21.864076 | orchestrator | TASK [keystone : Check if policies shall be overwritten] ***********************
2025-06-02 20:10:21.864083 | orchestrator | Monday 02 June 2025 20:07:54 +0000 (0:00:01.754) 0:00:03.208 ***********
2025-06-02 20:10:21.864095 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=/opt/configuration/environments/kolla/files/overlays/keystone/policy.yaml)
2025-06-02 20:10:21.864101 | orchestrator |
2025-06-02 20:10:21.864107 | orchestrator | TASK [keystone : Set keystone policy file] *************************************
2025-06-02 20:10:21.864112 | orchestrator | Monday 02 June 2025 20:07:55 +0000 (0:00:00.835) 0:00:04.044 ***********
2025-06-02 20:10:21.864118 | orchestrator | ok: [testbed-node-0]
2025-06-02 20:10:21.864124 | orchestrator | ok: [testbed-node-1]
2025-06-02 20:10:21.864130 | orchestrator | ok: [testbed-node-2]
2025-06-02 20:10:21.864135 | orchestrator |
2025-06-02 20:10:21.864141 | orchestrator | TASK [keystone : Check if Keystone domain-specific config is supplied] *********
2025-06-02 20:10:21.864147 | orchestrator | Monday 02 June 2025 20:07:56 +0000 (0:00:00.449) 0:00:04.494 ***********
2025-06-02 20:10:21.864157 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-06-02 20:10:21.864163 | orchestrator |
2025-06-02 20:10:21.864168 | orchestrator | TASK [keystone : include_tasks] ************************************************
2025-06-02 20:10:21.864174 | orchestrator | Monday 02 June 2025 20:07:56 +0000 (0:00:00.627) 0:00:05.121 ***********
2025-06-02 20:10:21.864180 | orchestrator | included: /ansible/roles/keystone/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-02 20:10:21.864186 | orchestrator |
2025-06-02 20:10:21.864191 | orchestrator | TASK [service-cert-copy : keystone | Copying over extra CA certificates] *******
2025-06-02 20:10:21.864197 | orchestrator | Monday 02 June 2025 20:07:57 +0000 (0:00:00.513) 0:00:05.635 ***********
2025-06-02 20:10:21.864203 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']},
'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-06-02 20:10:21.864210 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-06-02 20:10:21.864221 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 
'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-06-02 20:10:21.864233 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-06-02 20:10:21.864244 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-06-02 20:10:21.864250 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-06-02 20:10:21.864256 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-06-02 20:10:21.864262 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-06-02 20:10:21.864268 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-06-02 20:10:21.864274 | orchestrator | 2025-06-02 20:10:21.864279 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS certificate] *** 2025-06-02 20:10:21.864285 | orchestrator | Monday 02 June 2025 20:08:00 +0000 (0:00:03.502) 0:00:09.137 *********** 2025-06-02 20:10:21.864299 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-06-02 20:10:21.864310 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': 
['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-06-02 20:10:21.864316 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-06-02 20:10:21.864322 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:10:21.864328 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': 
True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-06-02 20:10:21.864335 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-06-02 20:10:21.864357 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-06-02 20:10:21.864364 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:10:21.864370 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 
'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-06-02 20:10:21.864376 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-06-02 20:10:21.864382 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  
2025-06-02 20:10:21.864388 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:10:21.864394 | orchestrator | 2025-06-02 20:10:21.864400 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS key] **** 2025-06-02 20:10:21.864406 | orchestrator | Monday 02 June 2025 20:08:01 +0000 (0:00:00.518) 0:00:09.656 *********** 2025-06-02 20:10:21.864412 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-06-02 20:10:21.864441 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-06-02 20:10:21.864448 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-06-02 20:10:21.864454 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:10:21.864460 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-06-02 20:10:21.864466 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 
'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-06-02 20:10:21.864472 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-06-02 20:10:21.864482 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:10:21.864495 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 
'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-06-02 20:10:21.864502 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-06-02 20:10:21.864508 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-06-02 20:10:21.864514 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:10:21.864519 | orchestrator | 2025-06-02 20:10:21.864525 | orchestrator | TASK [keystone : Copying over config.json files for services] ****************** 2025-06-02 20:10:21.864531 | orchestrator | Monday 02 June 2025 20:08:01 +0000 
(0:00:00.704) 0:00:10.360 *********** 2025-06-02 20:10:21.864537 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-06-02 20:10:21.864544 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-06-02 20:10:21.864562 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-06-02 20:10:21.864568 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-06-02 20:10:21.864575 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-06-02 20:10:21.864582 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-06-02 20:10:21.864589 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-06-02 20:10:21.864602 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': 
{'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-06-02 20:10:21.864615 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-06-02 20:10:21.864622 | orchestrator | 2025-06-02 20:10:21.864629 | orchestrator | TASK [keystone : Copying over keystone.conf] *********************************** 2025-06-02 20:10:21.864636 | orchestrator | Monday 02 June 2025 20:08:05 +0000 (0:00:03.503) 0:00:13.864 *********** 2025-06-02 20:10:21.864642 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-06-02 20:10:21.864649 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-06-02 20:10:21.864657 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': 
'5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-06-02 20:10:21.864689 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-06-02 20:10:21.864702 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': 
['balance roundrobin']}}}}) 2025-06-02 20:10:21.864709 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-06-02 20:10:21.864746 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-06-02 20:10:21.864753 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': 
'30'}}}) 2025-06-02 20:10:21.864764 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-06-02 20:10:21.864771 | orchestrator | 2025-06-02 20:10:21.864778 | orchestrator | TASK [keystone : Copying keystone-startup script for keystone] ***************** 2025-06-02 20:10:21.864784 | orchestrator | Monday 02 June 2025 20:08:10 +0000 (0:00:05.197) 0:00:19.061 *********** 2025-06-02 20:10:21.864791 | orchestrator | changed: [testbed-node-0] 2025-06-02 20:10:21.864798 | orchestrator | changed: [testbed-node-1] 2025-06-02 20:10:21.864805 | orchestrator | changed: [testbed-node-2] 2025-06-02 20:10:21.864811 | orchestrator | 2025-06-02 20:10:21.864818 | orchestrator | TASK [keystone : Create Keystone domain-specific config directory] ************* 2025-06-02 20:10:21.864824 | orchestrator | Monday 02 June 2025 20:08:11 +0000 (0:00:01.312) 0:00:20.374 *********** 2025-06-02 20:10:21.864830 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:10:21.864840 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:10:21.864847 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:10:21.864853 | orchestrator | 2025-06-02 20:10:21.864860 | orchestrator | TASK [keystone : Get file list in custom domains folder] *********************** 2025-06-02 20:10:21.864870 | orchestrator | Monday 02 June 2025 20:08:12 +0000 (0:00:00.532) 0:00:20.907 *********** 2025-06-02 20:10:21.864876 | orchestrator | 
skipping: [testbed-node-0] 2025-06-02 20:10:21.864883 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:10:21.864889 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:10:21.864896 | orchestrator | 2025-06-02 20:10:21.864903 | orchestrator | TASK [keystone : Copying Keystone Domain specific settings] ******************** 2025-06-02 20:10:21.864909 | orchestrator | Monday 02 June 2025 20:08:12 +0000 (0:00:00.389) 0:00:21.297 *********** 2025-06-02 20:10:21.864915 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:10:21.864922 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:10:21.864929 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:10:21.864935 | orchestrator | 2025-06-02 20:10:21.864942 | orchestrator | TASK [keystone : Copying over existing policy file] **************************** 2025-06-02 20:10:21.864949 | orchestrator | Monday 02 June 2025 20:08:13 +0000 (0:00:00.253) 0:00:21.551 *********** 2025-06-02 20:10:21.864955 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': 
['balance roundrobin']}}}}) 2025-06-02 20:10:21.864962 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-06-02 20:10:21.864972 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-06-02 20:10:21.864978 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-06-02 20:10:21.864993 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-06-02 20:10:21.865000 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-06-02 20:10:21.865006 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-06-02 20:10:21.865015 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-06-02 20:10:21.865022 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-06-02 20:10:21.865027 | orchestrator | 2025-06-02 20:10:21.865033 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-06-02 20:10:21.865039 | orchestrator | Monday 02 June 2025 20:08:15 +0000 (0:00:02.143) 0:00:23.694 *********** 2025-06-02 20:10:21.865045 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:10:21.865050 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:10:21.865056 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:10:21.865061 | orchestrator | 2025-06-02 20:10:21.865067 | orchestrator | TASK [keystone : Copying over wsgi-keystone.conf] ****************************** 2025-06-02 20:10:21.865073 | orchestrator | Monday 02 June 2025 20:08:15 +0000 (0:00:00.263) 0:00:23.958 *********** 2025-06-02 20:10:21.865078 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2025-06-02 20:10:21.865087 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2025-06-02 20:10:21.865093 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2025-06-02 20:10:21.865099 | orchestrator | 2025-06-02 20:10:21.865108 | orchestrator | TASK [keystone : Checking whether keystone-paste.ini file exists] ************** 2025-06-02 20:10:21.865114 | orchestrator | Monday 02 June 2025 20:08:17 +0000 (0:00:01.751) 0:00:25.709 *********** 2025-06-02 20:10:21.865119 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-06-02 20:10:21.865125 | orchestrator | 2025-06-02 20:10:21.865131 | orchestrator | TASK [keystone : Copying over keystone-paste.ini] ****************************** 2025-06-02 20:10:21.865136 | orchestrator | Monday 02 
June 2025 20:08:17 +0000 (0:00:00.682) 0:00:26.392 *********** 2025-06-02 20:10:21.865142 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:10:21.865148 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:10:21.865153 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:10:21.865159 | orchestrator | 2025-06-02 20:10:21.865164 | orchestrator | TASK [keystone : Generate the required cron jobs for the node] ***************** 2025-06-02 20:10:21.865170 | orchestrator | Monday 02 June 2025 20:08:18 +0000 (0:00:00.417) 0:00:26.810 *********** 2025-06-02 20:10:21.865176 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-06-02 20:10:21.865185 | orchestrator | ok: [testbed-node-1 -> localhost] 2025-06-02 20:10:21.865191 | orchestrator | ok: [testbed-node-2 -> localhost] 2025-06-02 20:10:21.865197 | orchestrator | 2025-06-02 20:10:21.865202 | orchestrator | TASK [keystone : Set fact with the generated cron jobs for building the crontab later] *** 2025-06-02 20:10:21.865208 | orchestrator | Monday 02 June 2025 20:08:19 +0000 (0:00:00.975) 0:00:27.785 *********** 2025-06-02 20:10:21.865214 | orchestrator | ok: [testbed-node-0] 2025-06-02 20:10:21.865219 | orchestrator | ok: [testbed-node-1] 2025-06-02 20:10:21.865225 | orchestrator | ok: [testbed-node-2] 2025-06-02 20:10:21.865230 | orchestrator | 2025-06-02 20:10:21.865236 | orchestrator | TASK [keystone : Copying files for keystone-fernet] **************************** 2025-06-02 20:10:21.865242 | orchestrator | Monday 02 June 2025 20:08:19 +0000 (0:00:00.271) 0:00:28.056 *********** 2025-06-02 20:10:21.865248 | orchestrator | changed: [testbed-node-0] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2025-06-02 20:10:21.865253 | orchestrator | changed: [testbed-node-1] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2025-06-02 20:10:21.865259 | orchestrator | changed: [testbed-node-2] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2025-06-02 20:10:21.865264 | orchestrator | changed: 
[testbed-node-0] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2025-06-02 20:10:21.865270 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2025-06-02 20:10:21.865276 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2025-06-02 20:10:21.865281 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2025-06-02 20:10:21.865287 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2025-06-02 20:10:21.865293 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2025-06-02 20:10:21.865298 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2025-06-02 20:10:21.865304 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2025-06-02 20:10:21.865309 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2025-06-02 20:10:21.865315 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2025-06-02 20:10:21.865321 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2025-06-02 20:10:21.865326 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2025-06-02 20:10:21.865332 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-06-02 20:10:21.865338 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-06-02 20:10:21.865343 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 
2025-06-02 20:10:21.865349 | orchestrator | changed: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-06-02 20:10:21.865355 | orchestrator | changed: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-06-02 20:10:21.865360 | orchestrator | changed: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-06-02 20:10:21.865366 | orchestrator | 2025-06-02 20:10:21.865372 | orchestrator | TASK [keystone : Copying files for keystone-ssh] ******************************* 2025-06-02 20:10:21.865377 | orchestrator | Monday 02 June 2025 20:08:28 +0000 (0:00:08.576) 0:00:36.633 *********** 2025-06-02 20:10:21.865383 | orchestrator | changed: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-06-02 20:10:21.865388 | orchestrator | changed: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-06-02 20:10:21.865394 | orchestrator | changed: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-06-02 20:10:21.865403 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-06-02 20:10:21.865412 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-06-02 20:10:21.865418 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-06-02 20:10:21.865424 | orchestrator | 2025-06-02 20:10:21.865429 | orchestrator | TASK [keystone : Check keystone containers] ************************************ 2025-06-02 20:10:21.865438 | orchestrator | Monday 02 June 2025 20:08:30 +0000 (0:00:02.504) 0:00:39.137 *********** 2025-06-02 20:10:21.865444 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': 
['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-06-02 20:10:21.865451 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-06-02 20:10:21.865457 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': 
{'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-06-02 20:10:21.865464 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-06-02 20:10:21.865482 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-06-02 20:10:21.865489 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-06-02 20:10:21.865495 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-06-02 20:10:21.865501 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-06-02 20:10:21.865507 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-06-02 20:10:21.865512 | orchestrator | 2025-06-02 20:10:21.865518 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-06-02 20:10:21.865524 | orchestrator | Monday 02 June 2025 20:08:32 +0000 (0:00:02.113) 0:00:41.251 *********** 2025-06-02 20:10:21.865530 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:10:21.865535 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:10:21.865545 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:10:21.865550 | orchestrator | 2025-06-02 20:10:21.865556 | orchestrator | TASK [keystone : Creating keystone database] *********************************** 2025-06-02 20:10:21.865562 | orchestrator | Monday 02 June 2025 20:08:33 +0000 (0:00:00.294) 0:00:41.546 *********** 2025-06-02 20:10:21.865567 | orchestrator | changed: [testbed-node-0] 2025-06-02 20:10:21.865573 | orchestrator | 2025-06-02 20:10:21.865578 | orchestrator | TASK [keystone : Creating Keystone database user and setting permissions] ****** 2025-06-02 20:10:21.865584 | 
orchestrator | Monday 02 June 2025 20:08:35 +0000 (0:00:02.423) 0:00:43.969 *********** 2025-06-02 20:10:21.865589 | orchestrator | changed: [testbed-node-0] 2025-06-02 20:10:21.865595 | orchestrator | 2025-06-02 20:10:21.865601 | orchestrator | TASK [keystone : Checking for any running keystone_fernet containers] ********** 2025-06-02 20:10:21.865606 | orchestrator | Monday 02 June 2025 20:08:38 +0000 (0:00:02.761) 0:00:46.731 *********** 2025-06-02 20:10:21.865612 | orchestrator | ok: [testbed-node-0] 2025-06-02 20:10:21.865618 | orchestrator | ok: [testbed-node-2] 2025-06-02 20:10:21.865623 | orchestrator | ok: [testbed-node-1] 2025-06-02 20:10:21.865629 | orchestrator | 2025-06-02 20:10:21.865635 | orchestrator | TASK [keystone : Group nodes where keystone_fernet is running] ***************** 2025-06-02 20:10:21.865643 | orchestrator | Monday 02 June 2025 20:08:39 +0000 (0:00:00.974) 0:00:47.705 *********** 2025-06-02 20:10:21.865649 | orchestrator | ok: [testbed-node-0] 2025-06-02 20:10:21.865655 | orchestrator | ok: [testbed-node-1] 2025-06-02 20:10:21.865660 | orchestrator | ok: [testbed-node-2] 2025-06-02 20:10:21.865702 | orchestrator | 2025-06-02 20:10:21.865713 | orchestrator | TASK [keystone : Fail if any hosts need bootstrapping and not all hosts targeted] *** 2025-06-02 20:10:21.865720 | orchestrator | Monday 02 June 2025 20:08:39 +0000 (0:00:00.330) 0:00:48.036 *********** 2025-06-02 20:10:21.865725 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:10:21.865731 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:10:21.865737 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:10:21.865742 | orchestrator | 2025-06-02 20:10:21.865750 | orchestrator | TASK [keystone : Running Keystone bootstrap container] ************************* 2025-06-02 20:10:21.865760 | orchestrator | Monday 02 June 2025 20:08:40 +0000 (0:00:00.414) 0:00:48.451 *********** 2025-06-02 20:10:21.865771 | orchestrator | changed: [testbed-node-0] 2025-06-02 
20:10:21.865787 | orchestrator | 2025-06-02 20:10:21.865797 | orchestrator | TASK [keystone : Running Keystone fernet bootstrap container] ****************** 2025-06-02 20:10:21.865807 | orchestrator | Monday 02 June 2025 20:08:53 +0000 (0:00:13.553) 0:01:02.004 *********** 2025-06-02 20:10:21.865818 | orchestrator | changed: [testbed-node-0] 2025-06-02 20:10:21.865828 | orchestrator | 2025-06-02 20:10:21.865839 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2025-06-02 20:10:21.865849 | orchestrator | Monday 02 June 2025 20:09:03 +0000 (0:00:09.935) 0:01:11.939 *********** 2025-06-02 20:10:21.865860 | orchestrator | 2025-06-02 20:10:21.865868 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2025-06-02 20:10:21.865874 | orchestrator | Monday 02 June 2025 20:09:03 +0000 (0:00:00.261) 0:01:12.201 *********** 2025-06-02 20:10:21.865879 | orchestrator | 2025-06-02 20:10:21.865885 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2025-06-02 20:10:21.865891 | orchestrator | Monday 02 June 2025 20:09:03 +0000 (0:00:00.065) 0:01:12.267 *********** 2025-06-02 20:10:21.865896 | orchestrator | 2025-06-02 20:10:21.865902 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-ssh container] ******************** 2025-06-02 20:10:21.865908 | orchestrator | Monday 02 June 2025 20:09:03 +0000 (0:00:00.061) 0:01:12.329 *********** 2025-06-02 20:10:21.865913 | orchestrator | changed: [testbed-node-0] 2025-06-02 20:10:21.865919 | orchestrator | changed: [testbed-node-2] 2025-06-02 20:10:21.865925 | orchestrator | changed: [testbed-node-1] 2025-06-02 20:10:21.865931 | orchestrator | 2025-06-02 20:10:21.865936 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-fernet container] ***************** 2025-06-02 20:10:21.865942 | orchestrator | Monday 02 June 2025 20:09:20 +0000 (0:00:16.646) 0:01:28.975 *********** 2025-06-02 
20:10:21.865953 | orchestrator | changed: [testbed-node-2] 2025-06-02 20:10:21.865959 | orchestrator | changed: [testbed-node-1] 2025-06-02 20:10:21.865965 | orchestrator | changed: [testbed-node-0] 2025-06-02 20:10:21.865970 | orchestrator | 2025-06-02 20:10:21.865976 | orchestrator | RUNNING HANDLER [keystone : Restart keystone container] ************************ 2025-06-02 20:10:21.865982 | orchestrator | Monday 02 June 2025 20:09:28 +0000 (0:00:07.619) 0:01:36.595 *********** 2025-06-02 20:10:21.865988 | orchestrator | changed: [testbed-node-0] 2025-06-02 20:10:21.865993 | orchestrator | changed: [testbed-node-1] 2025-06-02 20:10:21.865999 | orchestrator | changed: [testbed-node-2] 2025-06-02 20:10:21.866005 | orchestrator | 2025-06-02 20:10:21.866011 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-06-02 20:10:21.866068 | orchestrator | Monday 02 June 2025 20:09:34 +0000 (0:00:06.072) 0:01:42.668 *********** 2025-06-02 20:10:21.866074 | orchestrator | included: /ansible/roles/keystone/tasks/distribute_fernet.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 20:10:21.866080 | orchestrator | 2025-06-02 20:10:21.866086 | orchestrator | TASK [keystone : Waiting for Keystone SSH port to be UP] *********************** 2025-06-02 20:10:21.866092 | orchestrator | Monday 02 June 2025 20:09:34 +0000 (0:00:00.729) 0:01:43.397 *********** 2025-06-02 20:10:21.866097 | orchestrator | ok: [testbed-node-0] 2025-06-02 20:10:21.866103 | orchestrator | ok: [testbed-node-1] 2025-06-02 20:10:21.866109 | orchestrator | ok: [testbed-node-2] 2025-06-02 20:10:21.866114 | orchestrator | 2025-06-02 20:10:21.866120 | orchestrator | TASK [keystone : Run key distribution] ***************************************** 2025-06-02 20:10:21.866126 | orchestrator | Monday 02 June 2025 20:09:35 +0000 (0:00:00.771) 0:01:44.168 *********** 2025-06-02 20:10:21.866135 | orchestrator | changed: [testbed-node-0] 2025-06-02 
20:10:21.866146 | orchestrator | 2025-06-02 20:10:21.866155 | orchestrator | TASK [keystone : Creating admin project, user, role, service, and endpoint] **** 2025-06-02 20:10:21.866164 | orchestrator | Monday 02 June 2025 20:09:37 +0000 (0:00:01.859) 0:01:46.028 *********** 2025-06-02 20:10:21.866173 | orchestrator | changed: [testbed-node-0] => (item=RegionOne) 2025-06-02 20:10:21.866182 | orchestrator | 2025-06-02 20:10:21.866190 | orchestrator | TASK [service-ks-register : keystone | Creating services] ********************** 2025-06-02 20:10:21.866200 | orchestrator | Monday 02 June 2025 20:09:48 +0000 (0:00:10.819) 0:01:56.847 *********** 2025-06-02 20:10:21.866209 | orchestrator | changed: [testbed-node-0] => (item=keystone (identity)) 2025-06-02 20:10:21.866218 | orchestrator | 2025-06-02 20:10:21.866227 | orchestrator | TASK [service-ks-register : keystone | Creating endpoints] ********************* 2025-06-02 20:10:21.866237 | orchestrator | Monday 02 June 2025 20:10:09 +0000 (0:00:20.878) 0:02:17.726 *********** 2025-06-02 20:10:21.866246 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api-int.testbed.osism.xyz:5000 -> internal) 2025-06-02 20:10:21.866255 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api.testbed.osism.xyz:5000 -> public) 2025-06-02 20:10:21.866264 | orchestrator | 2025-06-02 20:10:21.866274 | orchestrator | TASK [service-ks-register : keystone | Creating projects] ********************** 2025-06-02 20:10:21.866284 | orchestrator | Monday 02 June 2025 20:10:15 +0000 (0:00:06.662) 0:02:24.388 *********** 2025-06-02 20:10:21.866293 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:10:21.866302 | orchestrator | 2025-06-02 20:10:21.866312 | orchestrator | TASK [service-ks-register : keystone | Creating users] ************************* 2025-06-02 20:10:21.866321 | orchestrator | Monday 02 June 2025 20:10:16 +0000 (0:00:00.322) 0:02:24.710 *********** 2025-06-02 20:10:21.866335 | orchestrator | 
skipping: [testbed-node-0] 2025-06-02 20:10:21.866346 | orchestrator | 2025-06-02 20:10:21.866356 | orchestrator | TASK [service-ks-register : keystone | Creating roles] ************************* 2025-06-02 20:10:21.866367 | orchestrator | Monday 02 June 2025 20:10:16 +0000 (0:00:00.111) 0:02:24.822 *********** 2025-06-02 20:10:21.866378 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:10:21.866389 | orchestrator | 2025-06-02 20:10:21.866404 | orchestrator | TASK [service-ks-register : keystone | Granting user roles] ******************** 2025-06-02 20:10:21.866410 | orchestrator | Monday 02 June 2025 20:10:16 +0000 (0:00:00.126) 0:02:24.948 *********** 2025-06-02 20:10:21.866422 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:10:21.866428 | orchestrator | 2025-06-02 20:10:21.866434 | orchestrator | TASK [keystone : Creating default user role] *********************************** 2025-06-02 20:10:21.866439 | orchestrator | Monday 02 June 2025 20:10:16 +0000 (0:00:00.304) 0:02:25.252 *********** 2025-06-02 20:10:21.866445 | orchestrator | ok: [testbed-node-0] 2025-06-02 20:10:21.866451 | orchestrator | 2025-06-02 20:10:21.866456 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-06-02 20:10:21.866462 | orchestrator | Monday 02 June 2025 20:10:20 +0000 (0:00:03.276) 0:02:28.528 *********** 2025-06-02 20:10:21.866467 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:10:21.866473 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:10:21.866479 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:10:21.866484 | orchestrator | 2025-06-02 20:10:21.866490 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-02 20:10:21.866496 | orchestrator | testbed-node-0 : ok=36  changed=20  unreachable=0 failed=0 skipped=14  rescued=0 ignored=0 2025-06-02 20:10:21.866503 | orchestrator | testbed-node-1 : ok=24  changed=13  unreachable=0 failed=0 
skipped=10  rescued=0 ignored=0 2025-06-02 20:10:21.866509 | orchestrator | testbed-node-2 : ok=24  changed=13  unreachable=0 failed=0 skipped=10  rescued=0 ignored=0 2025-06-02 20:10:21.866515 | orchestrator | 2025-06-02 20:10:21.866521 | orchestrator | 2025-06-02 20:10:21.866526 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-02 20:10:21.866532 | orchestrator | Monday 02 June 2025 20:10:20 +0000 (0:00:00.610) 0:02:29.139 *********** 2025-06-02 20:10:21.866538 | orchestrator | =============================================================================== 2025-06-02 20:10:21.866543 | orchestrator | service-ks-register : keystone | Creating services --------------------- 20.88s 2025-06-02 20:10:21.866549 | orchestrator | keystone : Restart keystone-ssh container ------------------------------ 16.65s 2025-06-02 20:10:21.866555 | orchestrator | keystone : Running Keystone bootstrap container ------------------------ 13.55s 2025-06-02 20:10:21.866560 | orchestrator | keystone : Creating admin project, user, role, service, and endpoint --- 10.82s 2025-06-02 20:10:21.866566 | orchestrator | keystone : Running Keystone fernet bootstrap container ------------------ 9.94s 2025-06-02 20:10:21.866571 | orchestrator | keystone : Copying files for keystone-fernet ---------------------------- 8.58s 2025-06-02 20:10:21.866577 | orchestrator | keystone : Restart keystone-fernet container ---------------------------- 7.62s 2025-06-02 20:10:21.866583 | orchestrator | service-ks-register : keystone | Creating endpoints --------------------- 6.66s 2025-06-02 20:10:21.866589 | orchestrator | keystone : Restart keystone container ----------------------------------- 6.07s 2025-06-02 20:10:21.866594 | orchestrator | keystone : Copying over keystone.conf ----------------------------------- 5.20s 2025-06-02 20:10:21.866600 | orchestrator | keystone : Copying over config.json files for services ------------------ 3.50s 2025-06-02 
20:10:21.866606 | orchestrator | service-cert-copy : keystone | Copying over extra CA certificates ------- 3.50s 2025-06-02 20:10:21.866611 | orchestrator | keystone : Creating default user role ----------------------------------- 3.28s 2025-06-02 20:10:21.866617 | orchestrator | keystone : Creating Keystone database user and setting permissions ------ 2.76s 2025-06-02 20:10:21.866622 | orchestrator | keystone : Copying files for keystone-ssh ------------------------------- 2.50s 2025-06-02 20:10:21.866628 | orchestrator | keystone : Creating keystone database ----------------------------------- 2.42s 2025-06-02 20:10:21.866635 | orchestrator | keystone : Copying over existing policy file ---------------------------- 2.14s 2025-06-02 20:10:21.866645 | orchestrator | keystone : Check keystone containers ------------------------------------ 2.11s 2025-06-02 20:10:21.866655 | orchestrator | keystone : Run key distribution ----------------------------------------- 1.86s 2025-06-02 20:10:21.866717 | orchestrator | keystone : Ensuring config directories exist ---------------------------- 1.75s 2025-06-02 20:10:21.866730 | orchestrator | 2025-06-02 20:10:21 | INFO  | Wait 1 second(s) until the next check 2025-06-02 20:10:24.906103 | orchestrator | 2025-06-02 20:10:24 | INFO  | Task cad1e46d-0fcf-4265-9558-4a2c08c5e22e is in state STARTED 2025-06-02 20:10:24.906761 | orchestrator | 2025-06-02 20:10:24 | INFO  | Task 8d61b1a9-3c60-4593-87be-92c8e90b33c8 is in state STARTED 2025-06-02 20:10:24.907763 | orchestrator | 2025-06-02 20:10:24 | INFO  | Task 608c3d3d-0110-4202-aeb0-6f352d204193 is in state STARTED 2025-06-02 20:10:24.909413 | orchestrator | 2025-06-02 20:10:24 | INFO  | Task 5d5cb4c2-9d21-425d-9aac-b40a8a5c37ce is in state STARTED 2025-06-02 20:10:24.913464 | orchestrator | 2025-06-02 20:10:24 | INFO  | Task 5073ad63-bfdc-4962-83a3-a999546f53f8 is in state STARTED 2025-06-02 20:10:24.913543 | orchestrator | 2025-06-02 20:10:24 | INFO  | Wait 1 second(s) until the 
next check 2025-06-02 20:10:30.997765 | orchestrator | 2025-06-02 20:10:30 | INFO  | Task cad1e46d-0fcf-4265-9558-4a2c08c5e22e is in state STARTED 2025-06-02 20:10:30.999787 | orchestrator | 2025-06-02 20:10:30 | INFO  | Task 8d61b1a9-3c60-4593-87be-92c8e90b33c8 is in state SUCCESS 2025-06-02 20:10:31.001341 | orchestrator | 2025-06-02 20:10:30 | INFO  | Task 608c3d3d-0110-4202-aeb0-6f352d204193 is in state STARTED 2025-06-02 20:10:31.002110 | orchestrator | 2025-06-02 20:10:31 | INFO  | Task 5d5cb4c2-9d21-425d-9aac-b40a8a5c37ce is in state STARTED 2025-06-02 20:10:31.003355 | orchestrator | 2025-06-02 20:10:31 | INFO  | Task 5073ad63-bfdc-4962-83a3-a999546f53f8 is in state STARTED 2025-06-02 20:10:31.004159 | orchestrator | 2025-06-02 20:10:31 | INFO  | Wait 1 second(s) until the next check 2025-06-02 20:10:34.057361 | orchestrator | 2025-06-02 20:10:34 | INFO  | Task fd0258ec-2bd9-4404-aaa9-1d15ffa2d0dc is in state STARTED 2025-06-02 20:10:34.060790 | orchestrator | 2025-06-02 20:10:34 | INFO  | Task cad1e46d-0fcf-4265-9558-4a2c08c5e22e is in state STARTED 2025-06-02 20:10:34.062983 | orchestrator | 2025-06-02 20:10:34 | INFO  | Task 608c3d3d-0110-4202-aeb0-6f352d204193 is in state
STARTED 2025-06-02 20:10:34.065532 | orchestrator | 2025-06-02 20:10:34 | INFO  | Task 5d5cb4c2-9d21-425d-9aac-b40a8a5c37ce is in state STARTED 2025-06-02 20:10:34.068997 | orchestrator | 2025-06-02 20:10:34 | INFO  | Task 5073ad63-bfdc-4962-83a3-a999546f53f8 is in state STARTED 2025-06-02 20:10:34.069528 | orchestrator | 2025-06-02 20:10:34 | INFO  | Wait 1 second(s) until the next check 2025-06-02 20:11:04.610955 | orchestrator | 2025-06-02 20:11:04 | INFO  | Task fd0258ec-2bd9-4404-aaa9-1d15ffa2d0dc is in state STARTED 2025-06-02 20:11:04.611101 | orchestrator | 2025-06-02 20:11:04 | INFO  | Task cad1e46d-0fcf-4265-9558-4a2c08c5e22e is in state STARTED 2025-06-02 20:11:04.612931 | orchestrator | 2025-06-02 20:11:04 | INFO  | Task 608c3d3d-0110-4202-aeb0-6f352d204193 is in state STARTED 2025-06-02 20:11:04.617130 | orchestrator |
2025-06-02 20:11:04 | INFO  | Task 5d5cb4c2-9d21-425d-9aac-b40a8a5c37ce is in state STARTED 2025-06-02 20:11:04.617905 | orchestrator | 2025-06-02 20:11:04 | INFO  | Task 5073ad63-bfdc-4962-83a3-a999546f53f8 is in state STARTED 2025-06-02 20:11:04.617972 | orchestrator | 2025-06-02 20:11:04 | INFO  | Wait 1 second(s) until the next check 2025-06-02 20:11:07.644274 | orchestrator | 2025-06-02 20:11:07 | INFO  | Task fd0258ec-2bd9-4404-aaa9-1d15ffa2d0dc is in state STARTED 2025-06-02 20:11:07.644386 | orchestrator | 2025-06-02 20:11:07 | INFO  | Task cad1e46d-0fcf-4265-9558-4a2c08c5e22e is in state STARTED 2025-06-02 20:11:07.645065 | orchestrator | 2025-06-02 20:11:07 | INFO  | Task 608c3d3d-0110-4202-aeb0-6f352d204193 is in state STARTED 2025-06-02 20:11:07.645967 | orchestrator | 2025-06-02 20:11:07 | INFO  | Task 5d5cb4c2-9d21-425d-9aac-b40a8a5c37ce is in state SUCCESS 2025-06-02 20:11:07.646310 | orchestrator | 2025-06-02 20:11:07.646337 | orchestrator | 2025-06-02 20:11:07.646350 | orchestrator | PLAY [Apply role cephclient] *************************************************** 2025-06-02 20:11:07.646363 | orchestrator | 2025-06-02 20:11:07.646374 | orchestrator | TASK [osism.services.cephclient : Include container tasks] ********************* 2025-06-02 20:11:07.646385 | orchestrator | Monday 02 June 2025 20:09:36 +0000 (0:00:00.240) 0:00:00.240 *********** 2025-06-02 20:11:07.646396 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/cephclient/tasks/container.yml for testbed-manager 2025-06-02 20:11:07.646408 | orchestrator | 2025-06-02 20:11:07.646419 | orchestrator | TASK [osism.services.cephclient : Create required directories] ***************** 2025-06-02 20:11:07.646430 | orchestrator | Monday 02 June 2025 20:09:36 +0000 (0:00:00.233) 0:00:00.473 *********** 2025-06-02 20:11:07.646441 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/configuration) 2025-06-02 20:11:07.646453 | orchestrator 
| changed: [testbed-manager] => (item=/opt/cephclient/data) 2025-06-02 20:11:07.646465 | orchestrator | ok: [testbed-manager] => (item=/opt/cephclient) 2025-06-02 20:11:07.646476 | orchestrator | 2025-06-02 20:11:07.646487 | orchestrator | TASK [osism.services.cephclient : Copy configuration files] ******************** 2025-06-02 20:11:07.646498 | orchestrator | Monday 02 June 2025 20:09:37 +0000 (0:00:01.167) 0:00:01.641 *********** 2025-06-02 20:11:07.646508 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.conf.j2', 'dest': '/opt/cephclient/configuration/ceph.conf'}) 2025-06-02 20:11:07.646519 | orchestrator | 2025-06-02 20:11:07.646530 | orchestrator | TASK [osism.services.cephclient : Copy keyring file] *************************** 2025-06-02 20:11:07.646541 | orchestrator | Monday 02 June 2025 20:09:38 +0000 (0:00:01.113) 0:00:02.754 *********** 2025-06-02 20:11:07.646552 | orchestrator | changed: [testbed-manager] 2025-06-02 20:11:07.646562 | orchestrator | 2025-06-02 20:11:07.646573 | orchestrator | TASK [osism.services.cephclient : Copy docker-compose.yml file] **************** 2025-06-02 20:11:07.646584 | orchestrator | Monday 02 June 2025 20:09:39 +0000 (0:00:00.963) 0:00:03.717 *********** 2025-06-02 20:11:07.646595 | orchestrator | changed: [testbed-manager] 2025-06-02 20:11:07.646605 | orchestrator | 2025-06-02 20:11:07.646616 | orchestrator | TASK [osism.services.cephclient : Manage cephclient service] ******************* 2025-06-02 20:11:07.646627 | orchestrator | Monday 02 June 2025 20:09:40 +0000 (0:00:00.881) 0:00:04.599 *********** 2025-06-02 20:11:07.646638 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage cephclient service (10 retries left). 
2025-06-02 20:11:07.646676 | orchestrator | ok: [testbed-manager] 2025-06-02 20:11:07.646691 | orchestrator | 2025-06-02 20:11:07.646702 | orchestrator | TASK [osism.services.cephclient : Copy wrapper scripts] ************************ 2025-06-02 20:11:07.646713 | orchestrator | Monday 02 June 2025 20:10:22 +0000 (0:00:41.349) 0:00:45.949 *********** 2025-06-02 20:11:07.646723 | orchestrator | changed: [testbed-manager] => (item=ceph) 2025-06-02 20:11:07.646734 | orchestrator | changed: [testbed-manager] => (item=ceph-authtool) 2025-06-02 20:11:07.646745 | orchestrator | changed: [testbed-manager] => (item=rados) 2025-06-02 20:11:07.646756 | orchestrator | changed: [testbed-manager] => (item=radosgw-admin) 2025-06-02 20:11:07.646767 | orchestrator | changed: [testbed-manager] => (item=rbd) 2025-06-02 20:11:07.646777 | orchestrator | 2025-06-02 20:11:07.646788 | orchestrator | TASK [osism.services.cephclient : Remove old wrapper scripts] ****************** 2025-06-02 20:11:07.646828 | orchestrator | Monday 02 June 2025 20:10:26 +0000 (0:00:04.021) 0:00:49.971 *********** 2025-06-02 20:11:07.646839 | orchestrator | ok: [testbed-manager] => (item=crushtool) 2025-06-02 20:11:07.646850 | orchestrator | 2025-06-02 20:11:07.646860 | orchestrator | TASK [osism.services.cephclient : Include package tasks] *********************** 2025-06-02 20:11:07.646871 | orchestrator | Monday 02 June 2025 20:10:26 +0000 (0:00:00.364) 0:00:50.335 *********** 2025-06-02 20:11:07.646882 | orchestrator | skipping: [testbed-manager] 2025-06-02 20:11:07.646892 | orchestrator | 2025-06-02 20:11:07.646903 | orchestrator | TASK [osism.services.cephclient : Include rook task] *************************** 2025-06-02 20:11:07.646913 | orchestrator | Monday 02 June 2025 20:10:26 +0000 (0:00:00.094) 0:00:50.429 *********** 2025-06-02 20:11:07.646937 | orchestrator | skipping: [testbed-manager] 2025-06-02 20:11:07.646948 | orchestrator | 2025-06-02 20:11:07.646959 | orchestrator | RUNNING HANDLER 
[osism.services.cephclient : Restart cephclient service] ******* 2025-06-02 20:11:07.646970 | orchestrator | Monday 02 June 2025 20:10:26 +0000 (0:00:00.214) 0:00:50.644 *********** 2025-06-02 20:11:07.646980 | orchestrator | changed: [testbed-manager] 2025-06-02 20:11:07.646994 | orchestrator | 2025-06-02 20:11:07.647232 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Ensure that all containers are up] *** 2025-06-02 20:11:07.647244 | orchestrator | Monday 02 June 2025 20:10:28 +0000 (0:00:01.410) 0:00:52.054 *********** 2025-06-02 20:11:07.647255 | orchestrator | changed: [testbed-manager] 2025-06-02 20:11:07.647266 | orchestrator | 2025-06-02 20:11:07.647277 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Wait for an healthy service] ****** 2025-06-02 20:11:07.647288 | orchestrator | Monday 02 June 2025 20:10:28 +0000 (0:00:00.647) 0:00:52.702 *********** 2025-06-02 20:11:07.647298 | orchestrator | changed: [testbed-manager] 2025-06-02 20:11:07.647309 | orchestrator | 2025-06-02 20:11:07.647320 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Copy bash completion scripts] ***** 2025-06-02 20:11:07.647331 | orchestrator | Monday 02 June 2025 20:10:29 +0000 (0:00:00.491) 0:00:53.194 *********** 2025-06-02 20:11:07.647342 | orchestrator | ok: [testbed-manager] => (item=ceph) 2025-06-02 20:11:07.647353 | orchestrator | ok: [testbed-manager] => (item=rados) 2025-06-02 20:11:07.647365 | orchestrator | ok: [testbed-manager] => (item=radosgw-admin) 2025-06-02 20:11:07.647376 | orchestrator | ok: [testbed-manager] => (item=rbd) 2025-06-02 20:11:07.647386 | orchestrator | 2025-06-02 20:11:07.647398 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-02 20:11:07.647409 | orchestrator | testbed-manager : ok=12  changed=8  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-02 20:11:07.647421 | orchestrator | 2025-06-02 20:11:07.647432 | orchestrator | 2025-06-02 
20:11:07.647458 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-02 20:11:07.647470 | orchestrator | Monday 02 June 2025 20:10:30 +0000 (0:00:01.228) 0:00:54.423 *********** 2025-06-02 20:11:07.647481 | orchestrator | =============================================================================== 2025-06-02 20:11:07.647492 | orchestrator | osism.services.cephclient : Manage cephclient service ------------------ 41.35s 2025-06-02 20:11:07.647503 | orchestrator | osism.services.cephclient : Copy wrapper scripts ------------------------ 4.02s 2025-06-02 20:11:07.647513 | orchestrator | osism.services.cephclient : Restart cephclient service ------------------ 1.41s 2025-06-02 20:11:07.647524 | orchestrator | osism.services.cephclient : Copy bash completion scripts ---------------- 1.23s 2025-06-02 20:11:07.647535 | orchestrator | osism.services.cephclient : Create required directories ----------------- 1.17s 2025-06-02 20:11:07.647546 | orchestrator | osism.services.cephclient : Copy configuration files -------------------- 1.11s 2025-06-02 20:11:07.647557 | orchestrator | osism.services.cephclient : Copy keyring file --------------------------- 0.96s 2025-06-02 20:11:07.647568 | orchestrator | osism.services.cephclient : Copy docker-compose.yml file ---------------- 0.88s 2025-06-02 20:11:07.647579 | orchestrator | osism.services.cephclient : Ensure that all containers are up ----------- 0.65s 2025-06-02 20:11:07.647602 | orchestrator | osism.services.cephclient : Wait for an healthy service ----------------- 0.49s 2025-06-02 20:11:07.647613 | orchestrator | osism.services.cephclient : Remove old wrapper scripts ------------------ 0.36s 2025-06-02 20:11:07.647624 | orchestrator | osism.services.cephclient : Include container tasks --------------------- 0.23s 2025-06-02 20:11:07.647634 | orchestrator | osism.services.cephclient : Include rook task --------------------------- 0.21s 2025-06-02 20:11:07.647645 | 
orchestrator | osism.services.cephclient : Include package tasks ----------------------- 0.09s 2025-06-02 20:11:07.647691 | orchestrator | 2025-06-02 20:11:07.647704 | orchestrator | 2025-06-02 20:11:07.647716 | orchestrator | PLAY [Download ironic ipa images] ********************************************** 2025-06-02 20:11:07.647727 | orchestrator | 2025-06-02 20:11:07.647738 | orchestrator | TASK [Ensure the destination directory exists] ********************************* 2025-06-02 20:11:07.647749 | orchestrator | Monday 02 June 2025 20:10:26 +0000 (0:00:00.078) 0:00:00.078 *********** 2025-06-02 20:11:07.647760 | orchestrator | changed: [localhost] 2025-06-02 20:11:07.647771 | orchestrator | 2025-06-02 20:11:07.647781 | orchestrator | TASK [Download ironic-agent initramfs] ***************************************** 2025-06-02 20:11:07.647792 | orchestrator | Monday 02 June 2025 20:10:27 +0000 (0:00:01.036) 0:00:01.114 *********** 2025-06-02 20:11:07.647802 | orchestrator | changed: [localhost] 2025-06-02 20:11:07.647813 | orchestrator | 2025-06-02 20:11:07.647824 | orchestrator | TASK [Download ironic-agent kernel] ******************************************** 2025-06-02 20:11:07.647835 | orchestrator | Monday 02 June 2025 20:10:55 +0000 (0:00:28.530) 0:00:29.644 *********** 2025-06-02 20:11:07.647845 | orchestrator | changed: [localhost] 2025-06-02 20:11:07.647856 | orchestrator | 2025-06-02 20:11:07.647867 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-06-02 20:11:07.647878 | orchestrator | 2025-06-02 20:11:07.647889 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-06-02 20:11:07.647899 | orchestrator | Monday 02 June 2025 20:11:04 +0000 (0:00:09.208) 0:00:38.853 *********** 2025-06-02 20:11:07.647910 | orchestrator | ok: [testbed-node-0] 2025-06-02 20:11:07.647921 | orchestrator | ok: [testbed-node-1] 2025-06-02 20:11:07.647932 | orchestrator | ok: 
[testbed-node-2] 2025-06-02 20:11:07.647942 | orchestrator | 2025-06-02 20:11:07.647953 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-06-02 20:11:07.647964 | orchestrator | Monday 02 June 2025 20:11:05 +0000 (0:00:00.299) 0:00:39.152 *********** 2025-06-02 20:11:07.647975 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: enable_ironic_True 2025-06-02 20:11:07.647986 | orchestrator | ok: [testbed-node-0] => (item=enable_ironic_False) 2025-06-02 20:11:07.647997 | orchestrator | ok: [testbed-node-1] => (item=enable_ironic_False) 2025-06-02 20:11:07.648008 | orchestrator | ok: [testbed-node-2] => (item=enable_ironic_False) 2025-06-02 20:11:07.648019 | orchestrator | 2025-06-02 20:11:07.648039 | orchestrator | PLAY [Apply role ironic] ******************************************************* 2025-06-02 20:11:07.648050 | orchestrator | skipping: no hosts matched 2025-06-02 20:11:07.648061 | orchestrator | 2025-06-02 20:11:07.648072 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-02 20:11:07.648083 | orchestrator | localhost : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-02 20:11:07.648096 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-02 20:11:07.648107 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-02 20:11:07.648118 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-02 20:11:07.648129 | orchestrator | 2025-06-02 20:11:07.648140 | orchestrator | 2025-06-02 20:11:07.648152 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-02 20:11:07.648163 | orchestrator | Monday 02 June 2025 20:11:05 +0000 (0:00:00.397) 0:00:39.550 *********** 2025-06-02 20:11:07.648181 | 
orchestrator | ===============================================================================
2025-06-02 20:11:07.648192 | orchestrator | Download ironic-agent initramfs ---------------------------------------- 28.53s
2025-06-02 20:11:07.648202 | orchestrator | Download ironic-agent kernel -------------------------------------------- 9.21s
2025-06-02 20:11:07.648213 | orchestrator | Ensure the destination directory exists --------------------------------- 1.04s
2025-06-02 20:11:07.648224 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.40s
2025-06-02 20:11:07.648243 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.30s
2025-06-02 20:11:07.648255 | orchestrator | 2025-06-02 20:11:07 | INFO  | Task 56647fb5-f948-42ab-bdce-48a91d1257a1 is in state STARTED
2025-06-02 20:11:07.648267 | orchestrator | 2025-06-02 20:11:07 | INFO  | Task 5073ad63-bfdc-4962-83a3-a999546f53f8 is in state STARTED
2025-06-02 20:11:07.648278 | orchestrator | 2025-06-02 20:11:07 | INFO  | Wait 1 second(s) until the next check
[... repeated polling cycles elided: tasks fd0258ec-2bd9-4404-aaa9-1d15ffa2d0dc, cad1e46d-0fcf-4265-9558-4a2c08c5e22e, 608c3d3d-0110-4202-aeb0-6f352d204193, 56647fb5-f948-42ab-bdce-48a91d1257a1 and 5073ad63-bfdc-4962-83a3-a999546f53f8 all reported in state STARTED every ~3 s from 20:11:10 to 20:11:59 ...]
2025-06-02 20:12:02.224258 | orchestrator | 2025-06-02 20:12:02 | INFO  | Task fd0258ec-2bd9-4404-aaa9-1d15ffa2d0dc is in state SUCCESS
[... repeated polling cycles elided: the four remaining tasks reported in state STARTED every ~3 s from 20:12:05 to 20:12:23 ...]
2025-06-02 20:12:26.542395 | orchestrator | 2025-06-02 20:12:26 | INFO  | Task cad1e46d-0fcf-4265-9558-4a2c08c5e22e is in state STARTED
2025-06-02 20:12:26.544395 | orchestrator | 2025-06-02 20:12:26 | INFO  | Task 608c3d3d-0110-4202-aeb0-6f352d204193 is in state STARTED
2025-06-02 20:12:26.545416 | orchestrator | 2025-06-02 20:12:26 | INFO  | Task 56647fb5-f948-42ab-bdce-48a91d1257a1 is in state SUCCESS
2025-06-02 20:12:26.547149 | orchestrator | PLAY [Bootstraph ceph dashboard] ***********************************************
2025-06-02 20:12:26.547173 | orchestrator | TASK [Disable the ceph dashboard] **********************************************
2025-06-02 20:12:26.547185 | orchestrator | Monday 02 June 2025 20:10:35 +0000 (0:00:00.273) 0:00:00.274 ***********
2025-06-02 20:12:26.547196 | orchestrator | changed: [testbed-manager]
2025-06-02 20:12:26.547220 | orchestrator | TASK [Set mgr/dashboard/ssl to false] ******************************************
2025-06-02 20:12:26.547231 | orchestrator | Monday 02 June 2025 20:10:37 +0000 (0:00:01.933) 0:00:02.207 ***********
2025-06-02 20:12:26.547241 | orchestrator | changed: [testbed-manager]
2025-06-02 20:12:26.547263 | orchestrator | TASK [Set mgr/dashboard/server_port to 7000] ***********************************
2025-06-02 20:12:26.547274 | orchestrator | Monday 02 June 2025 20:10:38 +0000 (0:00:01.098) 0:00:03.305 ***********
2025-06-02 20:12:26.547285 | orchestrator | changed: [testbed-manager]
2025-06-02 20:12:26.547307 | orchestrator | TASK [Set mgr/dashboard/server_addr to 0.0.0.0]
******************************** 2025-06-02 20:12:26.547317 | orchestrator | Monday 02 June 2025 20:10:39 +0000 (0:00:01.107) 0:00:04.412 *********** 2025-06-02 20:12:26.547328 | orchestrator | changed: [testbed-manager] 2025-06-02 20:12:26.547339 | orchestrator | 2025-06-02 20:12:26.547350 | orchestrator | TASK [Set mgr/dashboard/standby_behaviour to error] **************************** 2025-06-02 20:12:26.547361 | orchestrator | Monday 02 June 2025 20:10:40 +0000 (0:00:01.152) 0:00:05.565 *********** 2025-06-02 20:12:26.547372 | orchestrator | changed: [testbed-manager] 2025-06-02 20:12:26.547383 | orchestrator | 2025-06-02 20:12:26.547393 | orchestrator | TASK [Set mgr/dashboard/standby_error_status_code to 404] ********************** 2025-06-02 20:12:26.547404 | orchestrator | Monday 02 June 2025 20:10:41 +0000 (0:00:01.063) 0:00:06.629 *********** 2025-06-02 20:12:26.547415 | orchestrator | changed: [testbed-manager] 2025-06-02 20:12:26.547427 | orchestrator | 2025-06-02 20:12:26.547445 | orchestrator | TASK [Enable the ceph dashboard] *********************************************** 2025-06-02 20:12:26.547465 | orchestrator | Monday 02 June 2025 20:10:42 +0000 (0:00:01.029) 0:00:07.659 *********** 2025-06-02 20:12:26.547513 | orchestrator | changed: [testbed-manager] 2025-06-02 20:12:26.547533 | orchestrator | 2025-06-02 20:12:26.547550 | orchestrator | TASK [Write ceph_dashboard_password to temporary file] ************************* 2025-06-02 20:12:26.547567 | orchestrator | Monday 02 June 2025 20:10:44 +0000 (0:00:02.091) 0:00:09.750 *********** 2025-06-02 20:12:26.547583 | orchestrator | changed: [testbed-manager] 2025-06-02 20:12:26.547630 | orchestrator | 2025-06-02 20:12:26.547651 | orchestrator | TASK [Create admin user] ******************************************************* 2025-06-02 20:12:26.547686 | orchestrator | Monday 02 June 2025 20:10:45 +0000 (0:00:01.145) 0:00:10.895 *********** 2025-06-02 20:12:26.547703 | orchestrator | changed: 
[testbed-manager] 2025-06-02 20:12:26.547719 | orchestrator | 2025-06-02 20:12:26.547738 | orchestrator | TASK [Remove temporary file for ceph_dashboard_password] *********************** 2025-06-02 20:12:26.547755 | orchestrator | Monday 02 June 2025 20:11:37 +0000 (0:00:51.718) 0:01:02.614 *********** 2025-06-02 20:12:26.547772 | orchestrator | skipping: [testbed-manager] 2025-06-02 20:12:26.547789 | orchestrator | 2025-06-02 20:12:26.547804 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2025-06-02 20:12:26.547821 | orchestrator | 2025-06-02 20:12:26.547836 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2025-06-02 20:12:26.547853 | orchestrator | Monday 02 June 2025 20:11:37 +0000 (0:00:00.127) 0:01:02.742 *********** 2025-06-02 20:12:26.547870 | orchestrator | changed: [testbed-node-0] 2025-06-02 20:12:26.547887 | orchestrator | 2025-06-02 20:12:26.547905 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2025-06-02 20:12:26.547924 | orchestrator | 2025-06-02 20:12:26.547944 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2025-06-02 20:12:26.547962 | orchestrator | Monday 02 June 2025 20:11:49 +0000 (0:00:11.549) 0:01:14.292 *********** 2025-06-02 20:12:26.547980 | orchestrator | changed: [testbed-node-1] 2025-06-02 20:12:26.547991 | orchestrator | 2025-06-02 20:12:26.548002 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2025-06-02 20:12:26.548013 | orchestrator | 2025-06-02 20:12:26.548023 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2025-06-02 20:12:26.548034 | orchestrator | Monday 02 June 2025 20:11:50 +0000 (0:00:01.208) 0:01:15.500 *********** 2025-06-02 20:12:26.548044 | orchestrator | changed: [testbed-node-2] 2025-06-02 20:12:26.548055 | orchestrator | 
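Editor's note: the dashboard bootstrap play logged above (disable, reconfigure via mgr config keys, re-enable, create the admin user, restart each manager) corresponds roughly to the following Ceph CLI sequence. This is a hedged sketch, not the playbook's literal tasks; the password file path and the `admin`/`administrator` names are illustrative.

```shell
# Rough CLI equivalent of the "Bootstraph ceph dashboard" play.
# Assumes a working `ceph` client with admin credentials; names/paths are illustrative.
ceph mgr module disable dashboard
ceph config set mgr mgr/dashboard/ssl false
ceph config set mgr mgr/dashboard/server_port 7000
ceph config set mgr mgr/dashboard/server_addr 0.0.0.0
ceph config set mgr mgr/dashboard/standby_behaviour error
ceph config set mgr mgr/dashboard/standby_error_status_code 404
ceph mgr module enable dashboard

# Create the dashboard admin user from a temporary password file,
# then remove the file (the play skips removal only when configured to keep it).
printf '%s' "$CEPH_DASHBOARD_PASSWORD" > /tmp/ceph_dashboard_password
ceph dashboard ac-user-create admin -i /tmp/ceph_dashboard_password administrator
rm -f /tmp/ceph_dashboard_password

# Restart the mgr daemons (one node at a time, as in the rolling plays above)
# so the new dashboard settings take effect on active and standby managers.
systemctl restart ceph-mgr.target
```

The per-node restart plays in the log serialize this last step across testbed-node-0/1/2, which avoids losing all active and standby dashboards at once.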
2025-06-02 20:12:26.548066 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-02 20:12:26.548078 | orchestrator | testbed-manager : ok=9  changed=9  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-06-02 20:12:26.548091 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-02 20:12:26.548102 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-02 20:12:26.548113 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-02 20:12:26.548123 | orchestrator | 2025-06-02 20:12:26.548134 | orchestrator | 2025-06-02 20:12:26.548145 | orchestrator | 2025-06-02 20:12:26.548155 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-02 20:12:26.548166 | orchestrator | Monday 02 June 2025 20:12:01 +0000 (0:00:11.250) 0:01:26.751 *********** 2025-06-02 20:12:26.548177 | orchestrator | =============================================================================== 2025-06-02 20:12:26.548187 | orchestrator | Create admin user ------------------------------------------------------ 51.72s 2025-06-02 20:12:26.548198 | orchestrator | Restart ceph manager service ------------------------------------------- 24.01s 2025-06-02 20:12:26.548225 | orchestrator | Enable the ceph dashboard ----------------------------------------------- 2.09s 2025-06-02 20:12:26.548236 | orchestrator | Disable the ceph dashboard ---------------------------------------------- 1.93s 2025-06-02 20:12:26.548259 | orchestrator | Set mgr/dashboard/server_addr to 0.0.0.0 -------------------------------- 1.15s 2025-06-02 20:12:26.548270 | orchestrator | Write ceph_dashboard_password to temporary file ------------------------- 1.15s 2025-06-02 20:12:26.548281 | orchestrator | Set mgr/dashboard/server_port to 7000 
----------------------------------- 1.11s 2025-06-02 20:12:26.548291 | orchestrator | Set mgr/dashboard/ssl to false ------------------------------------------ 1.10s 2025-06-02 20:12:26.548302 | orchestrator | Set mgr/dashboard/standby_behaviour to error ---------------------------- 1.06s 2025-06-02 20:12:26.548313 | orchestrator | Set mgr/dashboard/standby_error_status_code to 404 ---------------------- 1.03s 2025-06-02 20:12:26.548324 | orchestrator | Remove temporary file for ceph_dashboard_password ----------------------- 0.13s 2025-06-02 20:12:26.548334 | orchestrator | 2025-06-02 20:12:26.548345 | orchestrator | 2025-06-02 20:12:26.548355 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-06-02 20:12:26.548371 | orchestrator | 2025-06-02 20:12:26.548389 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-06-02 20:12:26.548416 | orchestrator | Monday 02 June 2025 20:11:12 +0000 (0:00:00.373) 0:00:00.373 *********** 2025-06-02 20:12:26.548438 | orchestrator | ok: [testbed-node-0] 2025-06-02 20:12:26.548454 | orchestrator | ok: [testbed-node-1] 2025-06-02 20:12:26.548470 | orchestrator | ok: [testbed-node-2] 2025-06-02 20:12:26.548487 | orchestrator | 2025-06-02 20:12:26.548503 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-06-02 20:12:26.548523 | orchestrator | Monday 02 June 2025 20:11:13 +0000 (0:00:00.509) 0:00:00.883 *********** 2025-06-02 20:12:26.548542 | orchestrator | ok: [testbed-node-0] => (item=enable_placement_True) 2025-06-02 20:12:26.548560 | orchestrator | ok: [testbed-node-1] => (item=enable_placement_True) 2025-06-02 20:12:26.548579 | orchestrator | ok: [testbed-node-2] => (item=enable_placement_True) 2025-06-02 20:12:26.548595 | orchestrator | 2025-06-02 20:12:26.548697 | orchestrator | PLAY [Apply role placement] **************************************************** 2025-06-02 
20:12:26.548709 | orchestrator | 2025-06-02 20:12:26.548720 | orchestrator | TASK [placement : include_tasks] *********************************************** 2025-06-02 20:12:26.548730 | orchestrator | Monday 02 June 2025 20:11:13 +0000 (0:00:00.644) 0:00:01.528 *********** 2025-06-02 20:12:26.548741 | orchestrator | included: /ansible/roles/placement/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 20:12:26.548753 | orchestrator | 2025-06-02 20:12:26.548764 | orchestrator | TASK [service-ks-register : placement | Creating services] ********************* 2025-06-02 20:12:26.548784 | orchestrator | Monday 02 June 2025 20:11:15 +0000 (0:00:01.392) 0:00:02.920 *********** 2025-06-02 20:12:26.548795 | orchestrator | changed: [testbed-node-0] => (item=placement (placement)) 2025-06-02 20:12:26.548806 | orchestrator | 2025-06-02 20:12:26.548816 | orchestrator | TASK [service-ks-register : placement | Creating endpoints] ******************** 2025-06-02 20:12:26.548827 | orchestrator | Monday 02 June 2025 20:11:19 +0000 (0:00:04.180) 0:00:07.100 *********** 2025-06-02 20:12:26.548837 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api-int.testbed.osism.xyz:8780 -> internal) 2025-06-02 20:12:26.548848 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api.testbed.osism.xyz:8780 -> public) 2025-06-02 20:12:26.548859 | orchestrator | 2025-06-02 20:12:26.548870 | orchestrator | TASK [service-ks-register : placement | Creating projects] ********************* 2025-06-02 20:12:26.548881 | orchestrator | Monday 02 June 2025 20:11:26 +0000 (0:00:06.768) 0:00:13.869 *********** 2025-06-02 20:12:26.548891 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-06-02 20:12:26.548902 | orchestrator | 2025-06-02 20:12:26.548913 | orchestrator | TASK [service-ks-register : placement | Creating users] ************************ 2025-06-02 20:12:26.548923 | orchestrator | Monday 02 June 2025 20:11:29 +0000 
(0:00:03.385) 0:00:17.254 *********** 2025-06-02 20:12:26.548934 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-06-02 20:12:26.548944 | orchestrator | changed: [testbed-node-0] => (item=placement -> service) 2025-06-02 20:12:26.548965 | orchestrator | 2025-06-02 20:12:26.548976 | orchestrator | TASK [service-ks-register : placement | Creating roles] ************************ 2025-06-02 20:12:26.548986 | orchestrator | Monday 02 June 2025 20:11:33 +0000 (0:00:04.429) 0:00:21.683 *********** 2025-06-02 20:12:26.548997 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-06-02 20:12:26.549008 | orchestrator | 2025-06-02 20:12:26.549018 | orchestrator | TASK [service-ks-register : placement | Granting user roles] ******************* 2025-06-02 20:12:26.549029 | orchestrator | Monday 02 June 2025 20:11:37 +0000 (0:00:03.446) 0:00:25.129 *********** 2025-06-02 20:12:26.549040 | orchestrator | changed: [testbed-node-0] => (item=placement -> service -> admin) 2025-06-02 20:12:26.549050 | orchestrator | 2025-06-02 20:12:26.549061 | orchestrator | TASK [placement : include_tasks] *********************************************** 2025-06-02 20:12:26.549071 | orchestrator | Monday 02 June 2025 20:11:41 +0000 (0:00:03.921) 0:00:29.051 *********** 2025-06-02 20:12:26.549082 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:12:26.549092 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:12:26.549103 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:12:26.549113 | orchestrator | 2025-06-02 20:12:26.549124 | orchestrator | TASK [placement : Ensuring config directories exist] *************************** 2025-06-02 20:12:26.549135 | orchestrator | Monday 02 June 2025 20:11:41 +0000 (0:00:00.314) 0:00:29.366 *********** 2025-06-02 20:12:26.549161 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 
'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-06-02 20:12:26.549176 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-06-02 20:12:26.549191 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': 
['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-06-02 20:12:26.549208 | orchestrator | 2025-06-02 20:12:26.549218 | orchestrator | TASK [placement : Check if policies shall be overwritten] ********************** 2025-06-02 20:12:26.549228 | orchestrator | Monday 02 June 2025 20:11:42 +0000 (0:00:01.357) 0:00:30.724 *********** 2025-06-02 20:12:26.549237 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:12:26.549247 | orchestrator | 2025-06-02 20:12:26.549256 | orchestrator | TASK [placement : Set placement policy file] *********************************** 2025-06-02 20:12:26.549266 | orchestrator | Monday 02 June 2025 20:11:43 +0000 (0:00:00.190) 0:00:30.914 *********** 2025-06-02 20:12:26.549275 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:12:26.549284 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:12:26.549294 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:12:26.549303 | orchestrator | 2025-06-02 20:12:26.549313 | orchestrator | TASK [placement : include_tasks] *********************************************** 2025-06-02 20:12:26.549322 | orchestrator | Monday 02 June 2025 20:11:43 +0000 (0:00:00.791) 0:00:31.706 *********** 2025-06-02 20:12:26.549331 | orchestrator | included: /ansible/roles/placement/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 20:12:26.549341 | 
orchestrator | 2025-06-02 20:12:26.549350 | orchestrator | TASK [service-cert-copy : placement | Copying over extra CA certificates] ****** 2025-06-02 20:12:26.549360 | orchestrator | Monday 02 June 2025 20:11:44 +0000 (0:00:00.446) 0:00:32.152 *********** 2025-06-02 20:12:26.549376 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-06-02 20:12:26.549386 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': 
{'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-06-02 20:12:26.549401 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-06-02 20:12:26.549417 | orchestrator | 2025-06-02 20:12:26.549450 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS certificate] *** 2025-06-02 20:12:26.549461 | orchestrator | Monday 02 June 2025 20:11:46 +0000 (0:00:01.622) 0:00:33.775 *********** 2025-06-02 20:12:26.549471 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-06-02 20:12:26.549481 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:12:26.549491 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-06-02 20:12:26.549501 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:12:26.549519 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-06-02 20:12:26.549529 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:12:26.549539 | orchestrator | 2025-06-02 20:12:26.549548 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS key] *** 2025-06-02 20:12:26.549558 | orchestrator | Monday 02 June 2025 20:11:47 +0000 (0:00:01.094) 0:00:34.869 *********** 2025-06-02 20:12:26.549572 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-06-02 20:12:26.549588 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:12:26.549615 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 
'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-06-02 20:12:26.549626 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:12:26.549636 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-06-02 20:12:26.549646 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:12:26.549655 | orchestrator | 2025-06-02 20:12:26.549665 | orchestrator | TASK [placement : Copying over config.json files for 
services] ***************** 2025-06-02 20:12:26.549680 | orchestrator | Monday 02 June 2025 20:11:48 +0000 (0:00:00.978) 0:00:35.847 *********** 2025-06-02 20:12:26.549691 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-06-02 20:12:26.549707 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 
'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-06-02 20:12:26.549739 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-06-02 20:12:26.549753 | orchestrator | 2025-06-02 20:12:26.549775 | orchestrator | TASK [placement : Copying over placement.conf] ********************************* 2025-06-02 20:12:26.549793 | orchestrator | Monday 02 June 2025 20:11:50 +0000 (0:00:01.999) 0:00:37.847 *********** 2025-06-02 20:12:26.549809 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': 
'8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-06-02 20:12:26.549836 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-06-02 20:12:26.549852 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-06-02 20:12:26.549879 | orchestrator | 2025-06-02 20:12:26.549895 | orchestrator | TASK [placement : Copying over placement-api wsgi configuration] *************** 2025-06-02 20:12:26.549911 | orchestrator | Monday 02 June 2025 20:11:52 +0000 (0:00:02.290) 0:00:40.137 *********** 2025-06-02 20:12:26.549926 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2025-06-02 20:12:26.549957 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2025-06-02 20:12:26.549975 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2025-06-02 20:12:26.549991 | orchestrator | 2025-06-02 20:12:26.550007 | orchestrator | TASK [placement : Copying over migrate-db.rc.j2 configuration] ***************** 2025-06-02 20:12:26.550080 | orchestrator | Monday 02 June 2025 20:11:54 +0000 (0:00:01.798) 0:00:41.936 *********** 2025-06-02 20:12:26.550091 | orchestrator | changed: [testbed-node-0] 2025-06-02 20:12:26.550100 | orchestrator | changed: [testbed-node-1] 2025-06-02 20:12:26.550110 | orchestrator | changed: [testbed-node-2] 2025-06-02 20:12:26.550119 | orchestrator | 2025-06-02 20:12:26.550128 | orchestrator | TASK [placement : Copying over existing policy file] *************************** 2025-06-02 20:12:26.550138 | orchestrator | Monday 02 June 2025 20:11:55 +0000 (0:00:01.708) 0:00:43.644 *********** 2025-06-02 20:12:26.550148 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-06-02 20:12:26.550158 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:12:26.550181 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-06-02 20:12:26.550208 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:12:26.550223 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': 
['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-06-02 20:12:26.550239 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:12:26.550255 | orchestrator | 2025-06-02 20:12:26.550272 | orchestrator | TASK [placement : Check placement containers] ********************************** 2025-06-02 20:12:26.550289 | orchestrator | Monday 02 June 2025 20:11:56 +0000 (0:00:00.753) 0:00:44.398 *********** 2025-06-02 20:12:26.550314 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-06-02 
20:12:26.550328 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-06-02 20:12:26.550345 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-06-02 20:12:26.550364 | orchestrator | 2025-06-02 20:12:26.550375 | orchestrator | TASK [placement : 
Creating placement databases] ******************************** 2025-06-02 20:12:26.550390 | orchestrator | Monday 02 June 2025 20:11:59 +0000 (0:00:02.400) 0:00:46.798 *********** 2025-06-02 20:12:26.550405 | orchestrator | changed: [testbed-node-0] 2025-06-02 20:12:26.550420 | orchestrator | 2025-06-02 20:12:26.550435 | orchestrator | TASK [placement : Creating placement databases user and setting permissions] *** 2025-06-02 20:12:26.550450 | orchestrator | Monday 02 June 2025 20:12:01 +0000 (0:00:02.036) 0:00:48.835 *********** 2025-06-02 20:12:26.550464 | orchestrator | changed: [testbed-node-0] 2025-06-02 20:12:26.550478 | orchestrator | 2025-06-02 20:12:26.550493 | orchestrator | TASK [placement : Running placement bootstrap container] *********************** 2025-06-02 20:12:26.550509 | orchestrator | Monday 02 June 2025 20:12:03 +0000 (0:00:02.100) 0:00:50.935 *********** 2025-06-02 20:12:26.550523 | orchestrator | changed: [testbed-node-0] 2025-06-02 20:12:26.550538 | orchestrator | 2025-06-02 20:12:26.550553 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2025-06-02 20:12:26.550569 | orchestrator | Monday 02 June 2025 20:12:16 +0000 (0:00:12.925) 0:01:03.861 *********** 2025-06-02 20:12:26.550586 | orchestrator | 2025-06-02 20:12:26.550630 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2025-06-02 20:12:26.550648 | orchestrator | Monday 02 June 2025 20:12:16 +0000 (0:00:00.131) 0:01:03.992 *********** 2025-06-02 20:12:26.550658 | orchestrator | 2025-06-02 20:12:26.550668 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2025-06-02 20:12:26.550680 | orchestrator | Monday 02 June 2025 20:12:16 +0000 (0:00:00.144) 0:01:04.137 *********** 2025-06-02 20:12:26.550696 | orchestrator | 2025-06-02 20:12:26.550710 | orchestrator | RUNNING HANDLER [placement : Restart placement-api container] ****************** 2025-06-02 
20:12:26.550723 | orchestrator | Monday 02 June 2025 20:12:16 +0000 (0:00:00.148) 0:01:04.285 *********** 2025-06-02 20:12:26.550737 | orchestrator | changed: [testbed-node-1] 2025-06-02 20:12:26.550751 | orchestrator | changed: [testbed-node-2] 2025-06-02 20:12:26.550767 | orchestrator | changed: [testbed-node-0] 2025-06-02 20:12:26.550783 | orchestrator | 2025-06-02 20:12:26.550800 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-02 20:12:26.550818 | orchestrator | testbed-node-0 : ok=21  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-06-02 20:12:26.550845 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-06-02 20:12:26.550864 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-06-02 20:12:26.550882 | orchestrator | 2025-06-02 20:12:26.550899 | orchestrator | 2025-06-02 20:12:26.550917 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-02 20:12:26.550935 | orchestrator | Monday 02 June 2025 20:12:26 +0000 (0:00:09.558) 0:01:13.843 *********** 2025-06-02 20:12:26.550952 | orchestrator | =============================================================================== 2025-06-02 20:12:26.550970 | orchestrator | placement : Running placement bootstrap container ---------------------- 12.93s 2025-06-02 20:12:26.550988 | orchestrator | placement : Restart placement-api container ----------------------------- 9.56s 2025-06-02 20:12:26.551005 | orchestrator | service-ks-register : placement | Creating endpoints -------------------- 6.77s 2025-06-02 20:12:26.551023 | orchestrator | service-ks-register : placement | Creating users ------------------------ 4.43s 2025-06-02 20:12:26.551041 | orchestrator | service-ks-register : placement | Creating services --------------------- 4.18s 2025-06-02 20:12:26.551058 | 
orchestrator | service-ks-register : placement | Granting user roles ------------------- 3.92s 2025-06-02 20:12:26.551076 | orchestrator | service-ks-register : placement | Creating roles ------------------------ 3.45s 2025-06-02 20:12:26.551105 | orchestrator | service-ks-register : placement | Creating projects --------------------- 3.39s 2025-06-02 20:12:26.551123 | orchestrator | placement : Check placement containers ---------------------------------- 2.40s 2025-06-02 20:12:26.551140 | orchestrator | placement : Copying over placement.conf --------------------------------- 2.29s 2025-06-02 20:12:26.551158 | orchestrator | placement : Creating placement databases user and setting permissions --- 2.10s 2025-06-02 20:12:26.551175 | orchestrator | placement : Creating placement databases -------------------------------- 2.04s 2025-06-02 20:12:26.551193 | orchestrator | placement : Copying over config.json files for services ----------------- 2.00s 2025-06-02 20:12:26.551210 | orchestrator | placement : Copying over placement-api wsgi configuration --------------- 1.80s 2025-06-02 20:12:26.551228 | orchestrator | placement : Copying over migrate-db.rc.j2 configuration ----------------- 1.71s 2025-06-02 20:12:26.551245 | orchestrator | service-cert-copy : placement | Copying over extra CA certificates ------ 1.62s 2025-06-02 20:12:26.551263 | orchestrator | placement : include_tasks ----------------------------------------------- 1.39s 2025-06-02 20:12:26.551280 | orchestrator | placement : Ensuring config directories exist --------------------------- 1.36s 2025-06-02 20:12:26.551298 | orchestrator | service-cert-copy : placement | Copying over backend internal TLS certificate --- 1.09s 2025-06-02 20:12:26.551316 | orchestrator | service-cert-copy : placement | Copying over backend internal TLS key --- 0.98s 2025-06-02 20:12:26.551333 | orchestrator | 2025-06-02 20:12:26 | INFO  | Task 5073ad63-bfdc-4962-83a3-a999546f53f8 is in state STARTED 2025-06-02 
20:12:26.551351 | orchestrator | 2025-06-02 20:12:26 | INFO  | Wait 1 second(s) until the next check 2025-06-02 20:12:29.588451 | orchestrator | 2025-06-02 20:12:29 | INFO  | Task cad1e46d-0fcf-4265-9558-4a2c08c5e22e is in state SUCCESS 2025-06-02 20:12:29.588557 | orchestrator | 2025-06-02 20:12:29 | INFO  | Task 7364c8d4-18a3-43e8-b726-2d7f29a8a919 is in state STARTED 2025-06-02 20:12:29.588573 | orchestrator | 2025-06-02 20:12:29 | INFO  | Task 608c3d3d-0110-4202-aeb0-6f352d204193 is in state STARTED 2025-06-02 20:12:29.589570 | orchestrator | 2025-06-02 20:12:29.589646 | orchestrator | 2025-06-02 20:12:29.589653 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-06-02 20:12:29.589658 | orchestrator | 2025-06-02 20:12:29.589662 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-06-02 20:12:29.589667 | orchestrator | Monday 02 June 2025 20:10:25 +0000 (0:00:00.272) 0:00:00.272 *********** 2025-06-02 20:12:29.589671 | orchestrator | ok: [testbed-node-0] 2025-06-02 20:12:29.589676 | orchestrator | ok: [testbed-node-1] 2025-06-02 20:12:29.589680 | orchestrator | ok: [testbed-node-2] 2025-06-02 20:12:29.589684 | orchestrator | 2025-06-02 20:12:29.589687 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-06-02 20:12:29.589691 | orchestrator | Monday 02 June 2025 20:10:26 +0000 (0:00:00.253) 0:00:00.526 *********** 2025-06-02 20:12:29.589696 | orchestrator | ok: [testbed-node-0] => (item=enable_barbican_True) 2025-06-02 20:12:29.589700 | orchestrator | ok: [testbed-node-1] => (item=enable_barbican_True) 2025-06-02 20:12:29.589703 | orchestrator | ok: [testbed-node-2] => (item=enable_barbican_True) 2025-06-02 20:12:29.589707 | orchestrator | 2025-06-02 20:12:29.589711 | orchestrator | PLAY [Apply role barbican] ***************************************************** 2025-06-02 20:12:29.589715 | orchestrator | 
2025-06-02 20:12:29.589718 | orchestrator | TASK [barbican : include_tasks] ************************************************
2025-06-02 20:12:29.589722 | orchestrator | Monday 02 June 2025 20:10:26 +0000 (0:00:00.374) 0:00:00.900 ***********
2025-06-02 20:12:29.589726 | orchestrator | included: /ansible/roles/barbican/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-02 20:12:29.589730 | orchestrator |
2025-06-02 20:12:29.589734 | orchestrator | TASK [service-ks-register : barbican | Creating services] **********************
2025-06-02 20:12:29.589738 | orchestrator | Monday 02 June 2025 20:10:27 +0000 (0:00:00.462) 0:00:01.363 ***********
2025-06-02 20:12:29.589742 | orchestrator | changed: [testbed-node-0] => (item=barbican (key-manager))
2025-06-02 20:12:29.589761 | orchestrator |
2025-06-02 20:12:29.589765 | orchestrator | TASK [service-ks-register : barbican | Creating endpoints] *********************
2025-06-02 20:12:29.589769 | orchestrator | Monday 02 June 2025 20:10:30 +0000 (0:00:03.886) 0:00:05.249 ***********
2025-06-02 20:12:29.589781 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api-int.testbed.osism.xyz:9311 -> internal)
2025-06-02 20:12:29.589785 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api.testbed.osism.xyz:9311 -> public)
2025-06-02 20:12:29.589789 | orchestrator |
2025-06-02 20:12:29.589792 | orchestrator | TASK [service-ks-register : barbican | Creating projects] **********************
2025-06-02 20:12:29.589796 | orchestrator | Monday 02 June 2025 20:10:37 +0000 (0:00:06.447) 0:00:11.697 ***********
2025-06-02 20:12:29.589800 | orchestrator | changed: [testbed-node-0] => (item=service)
2025-06-02 20:12:29.589804 | orchestrator |
2025-06-02 20:12:29.589807 | orchestrator | TASK [service-ks-register : barbican | Creating users] *************************
2025-06-02 20:12:29.589811 | orchestrator | Monday 02 June 2025 20:10:41 +0000 (0:00:03.704) 0:00:15.402 ***********
2025-06-02 20:12:29.589815 | orchestrator | [WARNING]: Module did not set no_log for update_password
2025-06-02 20:12:29.589819 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service)
2025-06-02 20:12:29.589823 | orchestrator |
2025-06-02 20:12:29.589826 | orchestrator | TASK [service-ks-register : barbican | Creating roles] *************************
2025-06-02 20:12:29.589830 | orchestrator | Monday 02 June 2025 20:10:44 +0000 (0:00:03.828) 0:00:19.230 ***********
2025-06-02 20:12:29.589834 | orchestrator | ok: [testbed-node-0] => (item=admin)
2025-06-02 20:12:29.589838 | orchestrator | changed: [testbed-node-0] => (item=key-manager:service-admin)
2025-06-02 20:12:29.589841 | orchestrator | changed: [testbed-node-0] => (item=creator)
2025-06-02 20:12:29.589845 | orchestrator | changed: [testbed-node-0] => (item=observer)
2025-06-02 20:12:29.589849 | orchestrator | changed: [testbed-node-0] => (item=audit)
2025-06-02 20:12:29.589852 | orchestrator |
2025-06-02 20:12:29.589856 | orchestrator | TASK [service-ks-register : barbican | Granting user roles] ********************
2025-06-02 20:12:29.589860 | orchestrator | Monday 02 June 2025 20:11:00 +0000 (0:00:15.550) 0:00:34.781 ***********
2025-06-02 20:12:29.589863 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service -> admin)
2025-06-02 20:12:29.589867 | orchestrator |
2025-06-02 20:12:29.589871 | orchestrator | TASK [barbican : Ensuring config directories exist] ****************************
2025-06-02 20:12:29.589875 | orchestrator | Monday 02 June 2025 20:11:04 +0000 (0:00:03.548) 0:00:38.329 ***********
2025-06-02 20:12:29.589881 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro',
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-06-02 20:12:29.589895 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-06-02 20:12:29.589904 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': 
['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-06-02 20:12:29.589911 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-06-02 20:12:29.589915 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-06-02 20:12:29.589920 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-06-02 20:12:29.589930 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-06-02 20:12:29.589937 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 
'timeout': '30'}}}) 2025-06-02 20:12:29.589941 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-06-02 20:12:29.589945 | orchestrator | 2025-06-02 20:12:29.589951 | orchestrator | TASK [barbican : Ensuring vassals config directories exist] ******************** 2025-06-02 20:12:29.589955 | orchestrator | Monday 02 June 2025 20:11:06 +0000 (0:00:02.192) 0:00:40.522 *********** 2025-06-02 20:12:29.589959 | orchestrator | changed: [testbed-node-2] => (item=barbican-api/vassals) 2025-06-02 20:12:29.589963 | orchestrator | changed: [testbed-node-0] => (item=barbican-api/vassals) 2025-06-02 20:12:29.589966 | orchestrator | changed: [testbed-node-1] => (item=barbican-api/vassals) 2025-06-02 20:12:29.589970 | orchestrator | 2025-06-02 20:12:29.589973 | orchestrator | TASK [barbican : Check if policies shall be overwritten] *********************** 2025-06-02 20:12:29.589977 | orchestrator | Monday 02 June 2025 20:11:07 +0000 (0:00:01.514) 0:00:42.036 *********** 2025-06-02 20:12:29.589981 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:12:29.589984 | orchestrator | 2025-06-02 20:12:29.589988 | orchestrator | TASK [barbican : Set barbican policy file] ************************************* 2025-06-02 20:12:29.589992 | orchestrator | Monday 02 June 2025 20:11:07 +0000 (0:00:00.168) 0:00:42.205 *********** 2025-06-02 20:12:29.589995 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:12:29.589999 | orchestrator | 
skipping: [testbed-node-1] 2025-06-02 20:12:29.590003 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:12:29.590006 | orchestrator | 2025-06-02 20:12:29.590010 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2025-06-02 20:12:29.590315 | orchestrator | Monday 02 June 2025 20:11:08 +0000 (0:00:00.488) 0:00:42.693 *********** 2025-06-02 20:12:29.590326 | orchestrator | included: /ansible/roles/barbican/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 20:12:29.590331 | orchestrator | 2025-06-02 20:12:29.590335 | orchestrator | TASK [service-cert-copy : barbican | Copying over extra CA certificates] ******* 2025-06-02 20:12:29.590340 | orchestrator | Monday 02 June 2025 20:11:09 +0000 (0:00:00.998) 0:00:43.691 *********** 2025-06-02 20:12:29.590344 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-06-02 20:12:29.590361 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-06-02 20:12:29.590370 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-06-02 20:12:29.590375 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-06-02 20:12:29.590380 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-06-02 20:12:29.590384 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-06-02 20:12:29.590396 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 
'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-06-02 20:12:29.590400 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-06-02 20:12:29.590405 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-06-02 20:12:29.590409 | orchestrator | 2025-06-02 20:12:29.590413 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS certificate] *** 2025-06-02 20:12:29.590418 | orchestrator | Monday 02 June 
2025 20:11:13 +0000 (0:00:03.919) 0:00:47.611 *********** 2025-06-02 20:12:29.590424 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-06-02 20:12:29.590429 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-06-02 20:12:29.590435 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-06-02 20:12:29.590443 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:12:29.590453 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-06-02 20:12:29.590458 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-06-02 20:12:29.590464 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-06-02 20:12:29.590469 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:12:29.590473 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-06-02 20:12:29.590477 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': 
{'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-06-02 20:12:29.590484 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-06-02 20:12:29.590488 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:12:29.590492 | orchestrator | 2025-06-02 20:12:29.590498 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS key] **** 2025-06-02 20:12:29.590501 | orchestrator | Monday 02 June 2025 20:11:15 +0000 (0:00:01.778) 0:00:49.389 *********** 2025-06-02 20:12:29.590506 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-06-02 20:12:29.590512 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-06-02 20:12:29.590516 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-06-02 20:12:29.590520 | orchestrator | skipping: [testbed-node-0] 2025-06-02 
20:12:29.590524 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-06-02 20:12:29.590534 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-06-02 20:12:29.590541 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': 
['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-06-02 20:12:29.590545 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:12:29.590549 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-06-02 20:12:29.590555 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-06-02 20:12:29.590559 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-06-02 20:12:29.590566 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:12:29.590570 | orchestrator | 2025-06-02 20:12:29.590573 | orchestrator | TASK [barbican : Copying over config.json files for services] ****************** 2025-06-02 20:12:29.590578 | orchestrator | Monday 02 June 2025 20:11:16 +0000 (0:00:01.780) 0:00:51.169 *********** 2025-06-02 20:12:29.590582 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-06-02 20:12:29.590589 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-06-02 20:12:29.590595 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 
'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-06-02 20:12:29.590621 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-06-02 20:12:29.590629 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-06-02 20:12:29.590633 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-06-02 20:12:29.590640 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-06-02 20:12:29.590645 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-06-02 20:12:29.590649 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-06-02 20:12:29.590653 | orchestrator | 2025-06-02 20:12:29.590657 | orchestrator | TASK [barbican : Copying over barbican-api.ini] ******************************** 2025-06-02 20:12:29.590664 | orchestrator | Monday 02 June 2025 20:11:20 +0000 (0:00:03.585) 0:00:54.761 *********** 2025-06-02 20:12:29.590667 | orchestrator | changed: [testbed-node-0] 2025-06-02 20:12:29.590672 | orchestrator | changed: [testbed-node-1] 2025-06-02 20:12:29.590675 | orchestrator | changed: [testbed-node-2] 2025-06-02 20:12:29.590679 | orchestrator | 2025-06-02 20:12:29.590683 | orchestrator | TASK [barbican : Checking whether barbican-api-paste.ini file exists] ********** 2025-06-02 20:12:29.590687 | orchestrator | Monday 02 June 2025 20:11:22 +0000 (0:00:02.170) 0:00:56.932 *********** 2025-06-02 20:12:29.590694 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-06-02 20:12:29.590698 | orchestrator | 2025-06-02 20:12:29.590702 | orchestrator | TASK [barbican : Copying over barbican-api-paste.ini] ************************** 2025-06-02 20:12:29.590706 | orchestrator | Monday 02 June 2025 20:11:23 +0000 (0:00:01.212) 0:00:58.145 *********** 2025-06-02 20:12:29.590709 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:12:29.590713 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:12:29.590717 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:12:29.590721 | orchestrator | 2025-06-02 20:12:29.590724 | orchestrator | TASK [barbican : Copying over barbican.conf] *********************************** 2025-06-02 20:12:29.590728 | orchestrator | Monday 02 June 2025 20:11:24 +0000 (0:00:00.523) 0:00:58.668 *********** 2025-06-02 20:12:29.590733 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': 
['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-06-02 20:12:29.590739 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-06-02 20:12:29.590745 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': 
['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-06-02 20:12:29.590753 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-06-02 20:12:29.590765 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-06-02 20:12:29.590775 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-06-02 20:12:29.590784 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-06-02 20:12:29.590795 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-06-02 20:12:29.590801 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-06-02 20:12:29.590807 | orchestrator | 2025-06-02 20:12:29.590813 | orchestrator | TASK [barbican : Copying over existing policy file] **************************** 2025-06-02 20:12:29.590820 | orchestrator | Monday 02 June 2025 20:11:33 +0000 (0:00:08.944) 0:01:07.612 *********** 2025-06-02 20:12:29.590830 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-06-02 
20:12:29.590841 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-06-02 20:12:29.590847 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-06-02 20:12:29.590853 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:12:29.590863 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-06-02 20:12:29.590870 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-06-02 20:12:29.590876 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-06-02 20:12:29.590886 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:12:29.590896 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-06-02 20:12:29.590902 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-06-02 20:12:29.590909 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-06-02 20:12:29.590915 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:12:29.590922 | orchestrator | 2025-06-02 20:12:29.590928 | orchestrator | TASK [barbican : Check barbican containers] ************************************ 2025-06-02 20:12:29.590934 | orchestrator | Monday 02 June 2025 20:11:34 +0000 (0:00:01.271) 0:01:08.883 *********** 2025-06-02 20:12:29.590945 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-06-02 20:12:29.590952 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-06-02 20:12:29.590959 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-06-02 20:12:29.590963 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-06-02 20:12:29.590967 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-06-02 20:12:29.590975 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-06-02 20:12:29.590979 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-06-02 20:12:29.590990 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-06-02 20:12:29.590994 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-06-02 20:12:29.590998 | orchestrator | 2025-06-02 20:12:29.591002 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2025-06-02 20:12:29.591005 | orchestrator | Monday 02 June 2025 20:11:37 +0000 (0:00:02.831) 0:01:11.715 *********** 2025-06-02 20:12:29.591009 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:12:29.591013 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:12:29.591017 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:12:29.591020 | orchestrator | 2025-06-02 20:12:29.591024 | 
orchestrator | TASK [barbican : Creating barbican database] *********************************** 2025-06-02 20:12:29.591028 | orchestrator | Monday 02 June 2025 20:11:38 +0000 (0:00:00.701) 0:01:12.416 *********** 2025-06-02 20:12:29.591031 | orchestrator | changed: [testbed-node-0] 2025-06-02 20:12:29.591035 | orchestrator | 2025-06-02 20:12:29.591039 | orchestrator | TASK [barbican : Creating barbican database user and setting permissions] ****** 2025-06-02 20:12:29.591043 | orchestrator | Monday 02 June 2025 20:11:40 +0000 (0:00:02.168) 0:01:14.584 *********** 2025-06-02 20:12:29.591046 | orchestrator | changed: [testbed-node-0] 2025-06-02 20:12:29.591050 | orchestrator | 2025-06-02 20:12:29.591054 | orchestrator | TASK [barbican : Running barbican bootstrap container] ************************* 2025-06-02 20:12:29.591057 | orchestrator | Monday 02 June 2025 20:11:42 +0000 (0:00:02.195) 0:01:16.780 *********** 2025-06-02 20:12:29.591062 | orchestrator | changed: [testbed-node-0] 2025-06-02 20:12:29.591073 | orchestrator | 2025-06-02 20:12:29.591078 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2025-06-02 20:12:29.591082 | orchestrator | Monday 02 June 2025 20:11:54 +0000 (0:00:12.211) 0:01:28.992 *********** 2025-06-02 20:12:29.591085 | orchestrator | 2025-06-02 20:12:29.591089 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2025-06-02 20:12:29.591093 | orchestrator | Monday 02 June 2025 20:11:54 +0000 (0:00:00.121) 0:01:29.113 *********** 2025-06-02 20:12:29.591096 | orchestrator | 2025-06-02 20:12:29.591100 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2025-06-02 20:12:29.591104 | orchestrator | Monday 02 June 2025 20:11:54 +0000 (0:00:00.087) 0:01:29.201 *********** 2025-06-02 20:12:29.591108 | orchestrator | 2025-06-02 20:12:29.591112 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-api 
container] ******************** 2025-06-02 20:12:29.591116 | orchestrator | Monday 02 June 2025 20:11:55 +0000 (0:00:00.100) 0:01:29.301 *********** 2025-06-02 20:12:29.591125 | orchestrator | changed: [testbed-node-0] 2025-06-02 20:12:29.591129 | orchestrator | changed: [testbed-node-1] 2025-06-02 20:12:29.591133 | orchestrator | changed: [testbed-node-2] 2025-06-02 20:12:29.591136 | orchestrator | 2025-06-02 20:12:29.591140 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-keystone-listener container] ****** 2025-06-02 20:12:29.591144 | orchestrator | Monday 02 June 2025 20:12:07 +0000 (0:00:12.416) 0:01:41.717 *********** 2025-06-02 20:12:29.591147 | orchestrator | changed: [testbed-node-0] 2025-06-02 20:12:29.591151 | orchestrator | changed: [testbed-node-2] 2025-06-02 20:12:29.591157 | orchestrator | changed: [testbed-node-1] 2025-06-02 20:12:29.591161 | orchestrator | 2025-06-02 20:12:29.591165 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-worker container] ***************** 2025-06-02 20:12:29.591169 | orchestrator | Monday 02 June 2025 20:12:18 +0000 (0:00:10.912) 0:01:52.630 *********** 2025-06-02 20:12:29.591172 | orchestrator | changed: [testbed-node-2] 2025-06-02 20:12:29.591176 | orchestrator | changed: [testbed-node-1] 2025-06-02 20:12:29.591180 | orchestrator | changed: [testbed-node-0] 2025-06-02 20:12:29.591183 | orchestrator | 2025-06-02 20:12:29.591187 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-02 20:12:29.591191 | orchestrator | testbed-node-0 : ok=24  changed=19  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-06-02 20:12:29.591196 | orchestrator | testbed-node-1 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-06-02 20:12:29.591200 | orchestrator | testbed-node-2 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-06-02 20:12:29.591204 | orchestrator | 2025-06-02 20:12:29.591208 | 
orchestrator | 2025-06-02 20:12:29.591212 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-02 20:12:29.591215 | orchestrator | Monday 02 June 2025 20:12:26 +0000 (0:00:08.281) 0:02:00.912 *********** 2025-06-02 20:12:29.591219 | orchestrator | =============================================================================== 2025-06-02 20:12:29.591223 | orchestrator | service-ks-register : barbican | Creating roles ------------------------ 15.55s 2025-06-02 20:12:29.591227 | orchestrator | barbican : Restart barbican-api container ------------------------------ 12.42s 2025-06-02 20:12:29.591230 | orchestrator | barbican : Running barbican bootstrap container ------------------------ 12.21s 2025-06-02 20:12:29.591234 | orchestrator | barbican : Restart barbican-keystone-listener container ---------------- 10.91s 2025-06-02 20:12:29.591240 | orchestrator | barbican : Copying over barbican.conf ----------------------------------- 8.94s 2025-06-02 20:12:29.591244 | orchestrator | barbican : Restart barbican-worker container ---------------------------- 8.28s 2025-06-02 20:12:29.591248 | orchestrator | service-ks-register : barbican | Creating endpoints --------------------- 6.45s 2025-06-02 20:12:29.591252 | orchestrator | service-cert-copy : barbican | Copying over extra CA certificates ------- 3.92s 2025-06-02 20:12:29.591256 | orchestrator | service-ks-register : barbican | Creating services ---------------------- 3.89s 2025-06-02 20:12:29.591259 | orchestrator | service-ks-register : barbican | Creating users ------------------------- 3.83s 2025-06-02 20:12:29.591263 | orchestrator | service-ks-register : barbican | Creating projects ---------------------- 3.70s 2025-06-02 20:12:29.591267 | orchestrator | barbican : Copying over config.json files for services ------------------ 3.59s 2025-06-02 20:12:29.591271 | orchestrator | service-ks-register : barbican | Granting user roles -------------------- 3.55s 
2025-06-02 20:12:29.591274 | orchestrator | barbican : Check barbican containers ------------------------------------ 2.83s 2025-06-02 20:12:29.591278 | orchestrator | barbican : Creating barbican database user and setting permissions ------ 2.20s 2025-06-02 20:12:29.591281 | orchestrator | barbican : Ensuring config directories exist ---------------------------- 2.19s 2025-06-02 20:12:29.591285 | orchestrator | barbican : Copying over barbican-api.ini -------------------------------- 2.17s 2025-06-02 20:12:29.591292 | orchestrator | barbican : Creating barbican database ----------------------------------- 2.17s 2025-06-02 20:12:29.591295 | orchestrator | service-cert-copy : barbican | Copying over backend internal TLS key ---- 1.78s 2025-06-02 20:12:29.591299 | orchestrator | service-cert-copy : barbican | Copying over backend internal TLS certificate --- 1.78s 2025-06-02 20:12:29.591303 | orchestrator | 2025-06-02 20:12:29 | INFO  | Task 5073ad63-bfdc-4962-83a3-a999546f53f8 is in state STARTED 2025-06-02 20:12:29.591307 | orchestrator | 2025-06-02 20:12:29 | INFO  | Task 16ba79ee-97cc-4b5f-bc62-4bf9918c2fda is in state STARTED 2025-06-02 20:12:29.591311 | orchestrator | 2025-06-02 20:12:29 | INFO  | Wait 1 second(s) until the next check 2025-06-02 20:12:32.614962 | orchestrator | 2025-06-02 20:12:32 | INFO  | Task 7364c8d4-18a3-43e8-b726-2d7f29a8a919 is in state STARTED 2025-06-02 20:12:32.617757 | orchestrator | 2025-06-02 20:12:32 | INFO  | Task 608c3d3d-0110-4202-aeb0-6f352d204193 is in state STARTED 2025-06-02 20:12:32.617847 | orchestrator | 2025-06-02 20:12:32 | INFO  | Task 5073ad63-bfdc-4962-83a3-a999546f53f8 is in state STARTED 2025-06-02 20:12:32.617862 | orchestrator | 2025-06-02 20:12:32 | INFO  | Task 16ba79ee-97cc-4b5f-bc62-4bf9918c2fda is in state STARTED 2025-06-02 20:12:32.617874 | orchestrator | 2025-06-02 20:12:32 | INFO  | Wait 1 second(s) until the next check 2025-06-02 20:12:35.639862 | orchestrator | 2025-06-02 20:12:35 | INFO  | Task 
7364c8d4-18a3-43e8-b726-2d7f29a8a919 is in state STARTED 2025-06-02 20:12:35.640096 | orchestrator | 2025-06-02 20:12:35 | INFO  | Task 608c3d3d-0110-4202-aeb0-6f352d204193 is in state STARTED 2025-06-02 20:12:35.641639 | orchestrator | 2025-06-02 20:12:35 | INFO  | Task 5073ad63-bfdc-4962-83a3-a999546f53f8 is in state STARTED 2025-06-02 20:12:35.644043 | orchestrator | 2025-06-02 20:12:35 | INFO  | Task 16ba79ee-97cc-4b5f-bc62-4bf9918c2fda is in state STARTED 2025-06-02 20:12:35.644117 | orchestrator | 2025-06-02 20:12:35 | INFO  | Wait 1 second(s) until the next check 2025-06-02 20:12:38.682878 | orchestrator | 2025-06-02 20:12:38 | INFO  | Task 7364c8d4-18a3-43e8-b726-2d7f29a8a919 is in state STARTED 2025-06-02 20:12:38.684293 | orchestrator | 2025-06-02 20:12:38 | INFO  | Task 608c3d3d-0110-4202-aeb0-6f352d204193 is in state STARTED 2025-06-02 20:12:38.684956 | orchestrator | 2025-06-02 20:12:38 | INFO  | Task 53a945be-f52c-4bbb-b1aa-4647c40e8c66 is in state STARTED 2025-06-02 20:12:38.685549 | orchestrator | 2025-06-02 20:12:38 | INFO  | Task 5073ad63-bfdc-4962-83a3-a999546f53f8 is in state STARTED 2025-06-02 20:12:38.686255 | orchestrator | 2025-06-02 20:12:38 | INFO  | Task 16ba79ee-97cc-4b5f-bc62-4bf9918c2fda is in state SUCCESS 2025-06-02 20:12:38.686291 | orchestrator | 2025-06-02 20:12:38 | INFO  | Wait 1 second(s) until the next check 2025-06-02 20:12:41.726688 | orchestrator | 2025-06-02 20:12:41 | INFO  | Task 7364c8d4-18a3-43e8-b726-2d7f29a8a919 is in state STARTED 2025-06-02 20:12:41.728273 | orchestrator | 2025-06-02 20:12:41 | INFO  | Task 608c3d3d-0110-4202-aeb0-6f352d204193 is in state STARTED 2025-06-02 20:12:41.728316 | orchestrator | 2025-06-02 20:12:41 | INFO  | Task 53a945be-f52c-4bbb-b1aa-4647c40e8c66 is in state STARTED 2025-06-02 20:12:41.729376 | orchestrator | 2025-06-02 20:12:41 | INFO  | Task 5073ad63-bfdc-4962-83a3-a999546f53f8 is in state STARTED 2025-06-02 20:12:41.729417 | orchestrator | 2025-06-02 20:12:41 | INFO  | Wait 1 
second(s) until the next check 2025-06-02 20:12:44.750299 | orchestrator | 2025-06-02 20:12:44 | INFO  | Task 7364c8d4-18a3-43e8-b726-2d7f29a8a919 is in state STARTED 2025-06-02 20:12:44.751842 | orchestrator | 2025-06-02 20:12:44 | INFO  | Task 608c3d3d-0110-4202-aeb0-6f352d204193 is in state STARTED 2025-06-02 20:12:44.754829 | orchestrator | 2025-06-02 20:12:44 | INFO  | Task 53a945be-f52c-4bbb-b1aa-4647c40e8c66 is in state STARTED 2025-06-02 20:12:44.755748 | orchestrator | 2025-06-02 20:12:44 | INFO  | Task 5073ad63-bfdc-4962-83a3-a999546f53f8 is in state STARTED 2025-06-02 20:12:44.755782 | orchestrator | 2025-06-02 20:12:44 | INFO  | Wait 1 second(s) until the next check 2025-06-02 20:12:47.789020 | orchestrator | 2025-06-02 20:12:47 | INFO  | Task 7364c8d4-18a3-43e8-b726-2d7f29a8a919 is in state STARTED 2025-06-02 20:12:47.789189 | orchestrator | 2025-06-02 20:12:47 | INFO  | Task 608c3d3d-0110-4202-aeb0-6f352d204193 is in state STARTED 2025-06-02 20:12:47.789907 | orchestrator | 2025-06-02 20:12:47 | INFO  | Task 53a945be-f52c-4bbb-b1aa-4647c40e8c66 is in state STARTED 2025-06-02 20:12:47.790723 | orchestrator | 2025-06-02 20:12:47 | INFO  | Task 5073ad63-bfdc-4962-83a3-a999546f53f8 is in state STARTED 2025-06-02 20:12:47.790761 | orchestrator | 2025-06-02 20:12:47 | INFO  | Wait 1 second(s) until the next check 2025-06-02 20:12:50.842412 | orchestrator | 2025-06-02 20:12:50 | INFO  | Task 7364c8d4-18a3-43e8-b726-2d7f29a8a919 is in state STARTED 2025-06-02 20:12:50.843819 | orchestrator | 2025-06-02 20:12:50 | INFO  | Task 608c3d3d-0110-4202-aeb0-6f352d204193 is in state STARTED 2025-06-02 20:12:50.845557 | orchestrator | 2025-06-02 20:12:50 | INFO  | Task 53a945be-f52c-4bbb-b1aa-4647c40e8c66 is in state STARTED 2025-06-02 20:12:50.847129 | orchestrator | 2025-06-02 20:12:50 | INFO  | Task 5073ad63-bfdc-4962-83a3-a999546f53f8 is in state STARTED 2025-06-02 20:12:50.847195 | orchestrator | 2025-06-02 20:12:50 | INFO  | Wait 1 second(s) until the next check 
2025-06-02 20:12:53.884178 | orchestrator | 2025-06-02 20:12:53 | INFO  | Task 7364c8d4-18a3-43e8-b726-2d7f29a8a919 is in state STARTED 2025-06-02 20:12:53.885748 | orchestrator | 2025-06-02 20:12:53 | INFO  | Task 608c3d3d-0110-4202-aeb0-6f352d204193 is in state STARTED 2025-06-02 20:12:53.886328 | orchestrator | 2025-06-02 20:12:53 | INFO  | Task 53a945be-f52c-4bbb-b1aa-4647c40e8c66 is in state STARTED 2025-06-02 20:12:53.887257 | orchestrator | 2025-06-02 20:12:53 | INFO  | Task 5073ad63-bfdc-4962-83a3-a999546f53f8 is in state STARTED 2025-06-02 20:12:53.889255 | orchestrator | 2025-06-02 20:12:53 | INFO  | Task 19740d14-7aae-45e4-a630-6c34ac150980 is in state STARTED 2025-06-02 20:12:53.889293 | orchestrator | 2025-06-02 20:12:53 | INFO  | Wait 1 second(s) until the next check 2025-06-02 20:12:56.940055 | orchestrator | 2025-06-02 20:12:56 | INFO  | Task 7364c8d4-18a3-43e8-b726-2d7f29a8a919 is in state STARTED 2025-06-02 20:12:56.940158 | orchestrator | 2025-06-02 20:12:56 | INFO  | Task 608c3d3d-0110-4202-aeb0-6f352d204193 is in state STARTED 2025-06-02 20:12:56.941070 | orchestrator | 2025-06-02 20:12:56 | INFO  | Task 53a945be-f52c-4bbb-b1aa-4647c40e8c66 is in state STARTED 2025-06-02 20:12:56.942123 | orchestrator | 2025-06-02 20:12:56 | INFO  | Task 5073ad63-bfdc-4962-83a3-a999546f53f8 is in state STARTED 2025-06-02 20:12:56.943359 | orchestrator | 2025-06-02 20:12:56 | INFO  | Task 19740d14-7aae-45e4-a630-6c34ac150980 is in state STARTED 2025-06-02 20:12:56.943383 | orchestrator | 2025-06-02 20:12:56 | INFO  | Wait 1 second(s) until the next check 2025-06-02 20:12:59.982294 | orchestrator | 2025-06-02 20:12:59 | INFO  | Task 7364c8d4-18a3-43e8-b726-2d7f29a8a919 is in state STARTED 2025-06-02 20:12:59.983007 | orchestrator | 2025-06-02 20:12:59 | INFO  | Task 608c3d3d-0110-4202-aeb0-6f352d204193 is in state STARTED 2025-06-02 20:12:59.986251 | orchestrator | 2025-06-02 20:12:59 | INFO  | Task 53a945be-f52c-4bbb-b1aa-4647c40e8c66 is in state STARTED 
2025-06-02 20:12:59.987052 | orchestrator | 2025-06-02 20:12:59 | INFO  | Task 5073ad63-bfdc-4962-83a3-a999546f53f8 is in state STARTED 2025-06-02 20:12:59.987392 | orchestrator | 2025-06-02 20:12:59 | INFO  | Task 19740d14-7aae-45e4-a630-6c34ac150980 is in state STARTED 2025-06-02 20:12:59.987421 | orchestrator | 2025-06-02 20:12:59 | INFO  | Wait 1 second(s) until the next check 2025-06-02 20:13:03.025947 | orchestrator | 2025-06-02 20:13:03 | INFO  | Task 7364c8d4-18a3-43e8-b726-2d7f29a8a919 is in state STARTED 2025-06-02 20:13:03.026963 | orchestrator | 2025-06-02 20:13:03 | INFO  | Task 608c3d3d-0110-4202-aeb0-6f352d204193 is in state STARTED 2025-06-02 20:13:03.028039 | orchestrator | 2025-06-02 20:13:03 | INFO  | Task 53a945be-f52c-4bbb-b1aa-4647c40e8c66 is in state STARTED 2025-06-02 20:13:03.030608 | orchestrator | 2025-06-02 20:13:03 | INFO  | Task 5073ad63-bfdc-4962-83a3-a999546f53f8 is in state STARTED 2025-06-02 20:13:03.030660 | orchestrator | 2025-06-02 20:13:03 | INFO  | Task 19740d14-7aae-45e4-a630-6c34ac150980 is in state STARTED 2025-06-02 20:13:03.030668 | orchestrator | 2025-06-02 20:13:03 | INFO  | Wait 1 second(s) until the next check 2025-06-02 20:13:06.067190 | orchestrator | 2025-06-02 20:13:06 | INFO  | Task 7364c8d4-18a3-43e8-b726-2d7f29a8a919 is in state STARTED 2025-06-02 20:13:06.068614 | orchestrator | 2025-06-02 20:13:06 | INFO  | Task 608c3d3d-0110-4202-aeb0-6f352d204193 is in state STARTED 2025-06-02 20:13:06.071827 | orchestrator | 2025-06-02 20:13:06 | INFO  | Task 53a945be-f52c-4bbb-b1aa-4647c40e8c66 is in state STARTED 2025-06-02 20:13:06.072703 | orchestrator | 2025-06-02 20:13:06 | INFO  | Task 5073ad63-bfdc-4962-83a3-a999546f53f8 is in state STARTED 2025-06-02 20:13:06.073020 | orchestrator | 2025-06-02 20:13:06 | INFO  | Task 19740d14-7aae-45e4-a630-6c34ac150980 is in state STARTED 2025-06-02 20:13:06.073048 | orchestrator | 2025-06-02 20:13:06 | INFO  | Wait 1 second(s) until the next check 2025-06-02 20:13:09.119341 | 
orchestrator | 2025-06-02 20:13:09 | INFO  | Task 7364c8d4-18a3-43e8-b726-2d7f29a8a919 is in state STARTED 2025-06-02 20:13:09.119453 | orchestrator | 2025-06-02 20:13:09 | INFO  | Task 608c3d3d-0110-4202-aeb0-6f352d204193 is in state STARTED 2025-06-02 20:13:09.119468 | orchestrator | 2025-06-02 20:13:09 | INFO  | Task 53a945be-f52c-4bbb-b1aa-4647c40e8c66 is in state STARTED 2025-06-02 20:13:09.123720 | orchestrator | 2025-06-02 20:13:09 | INFO  | Task 5073ad63-bfdc-4962-83a3-a999546f53f8 is in state STARTED 2025-06-02 20:13:09.123805 | orchestrator | 2025-06-02 20:13:09 | INFO  | Task 19740d14-7aae-45e4-a630-6c34ac150980 is in state STARTED 2025-06-02 20:13:09.123819 | orchestrator | 2025-06-02 20:13:09 | INFO  | Wait 1 second(s) until the next check 2025-06-02 20:13:12.160307 | orchestrator | 2025-06-02 20:13:12 | INFO  | Task 7364c8d4-18a3-43e8-b726-2d7f29a8a919 is in state STARTED 2025-06-02 20:13:12.162724 | orchestrator | 2025-06-02 20:13:12 | INFO  | Task 608c3d3d-0110-4202-aeb0-6f352d204193 is in state STARTED 2025-06-02 20:13:12.162807 | orchestrator | 2025-06-02 20:13:12 | INFO  | Task 53a945be-f52c-4bbb-b1aa-4647c40e8c66 is in state STARTED 2025-06-02 20:13:12.162821 | orchestrator | 2025-06-02 20:13:12 | INFO  | Task 5073ad63-bfdc-4962-83a3-a999546f53f8 is in state STARTED 2025-06-02 20:13:12.162833 | orchestrator | 2025-06-02 20:13:12 | INFO  | Task 19740d14-7aae-45e4-a630-6c34ac150980 is in state SUCCESS 2025-06-02 20:13:12.162844 | orchestrator | 2025-06-02 20:13:12 | INFO  | Wait 1 second(s) until the next check 2025-06-02 20:13:15.191137 | orchestrator | 2025-06-02 20:13:15 | INFO  | Task 7364c8d4-18a3-43e8-b726-2d7f29a8a919 is in state STARTED 2025-06-02 20:13:15.191275 | orchestrator | 2025-06-02 20:13:15 | INFO  | Task 608c3d3d-0110-4202-aeb0-6f352d204193 is in state STARTED 2025-06-02 20:13:15.192821 | orchestrator | 2025-06-02 20:13:15 | INFO  | Task 53a945be-f52c-4bbb-b1aa-4647c40e8c66 is in state STARTED 2025-06-02 20:13:15.193373 | 
orchestrator | 2025-06-02 20:13:15 | INFO  | Task 5073ad63-bfdc-4962-83a3-a999546f53f8 is in state STARTED 2025-06-02 20:13:15.193624 | orchestrator | 2025-06-02 20:13:15 | INFO  | Wait 1 second(s) until the next check 2025-06-02 20:13:18.221039 | orchestrator | 2025-06-02 20:13:18 | INFO  | Task 7364c8d4-18a3-43e8-b726-2d7f29a8a919 is in state STARTED 2025-06-02 20:13:18.223201 | orchestrator | 2025-06-02 20:13:18 | INFO  | Task 608c3d3d-0110-4202-aeb0-6f352d204193 is in state STARTED 2025-06-02 20:13:18.223588 | orchestrator | 2025-06-02 20:13:18 | INFO  | Task 53a945be-f52c-4bbb-b1aa-4647c40e8c66 is in state STARTED 2025-06-02 20:13:18.224079 | orchestrator | 2025-06-02 20:13:18 | INFO  | Task 5073ad63-bfdc-4962-83a3-a999546f53f8 is in state STARTED 2025-06-02 20:13:18.224100 | orchestrator | 2025-06-02 20:13:18 | INFO  | Wait 1 second(s) until the next check 2025-06-02 20:13:21.255921 | orchestrator | 2025-06-02 20:13:21 | INFO  | Task 7364c8d4-18a3-43e8-b726-2d7f29a8a919 is in state STARTED 2025-06-02 20:13:21.257964 | orchestrator | 2025-06-02 20:13:21 | INFO  | Task 608c3d3d-0110-4202-aeb0-6f352d204193 is in state STARTED 2025-06-02 20:13:21.261936 | orchestrator | 2025-06-02 20:13:21 | INFO  | Task 53a945be-f52c-4bbb-b1aa-4647c40e8c66 is in state STARTED 2025-06-02 20:13:21.263297 | orchestrator | 2025-06-02 20:13:21 | INFO  | Task 5073ad63-bfdc-4962-83a3-a999546f53f8 is in state STARTED 2025-06-02 20:13:21.263405 | orchestrator | 2025-06-02 20:13:21 | INFO  | Wait 1 second(s) until the next check 2025-06-02 20:13:24.311771 | orchestrator | 2025-06-02 20:13:24 | INFO  | Task 7364c8d4-18a3-43e8-b726-2d7f29a8a919 is in state STARTED 2025-06-02 20:13:24.311853 | orchestrator | 2025-06-02 20:13:24 | INFO  | Task 608c3d3d-0110-4202-aeb0-6f352d204193 is in state STARTED 2025-06-02 20:13:24.312523 | orchestrator | 2025-06-02 20:13:24 | INFO  | Task 53a945be-f52c-4bbb-b1aa-4647c40e8c66 is in state STARTED 2025-06-02 20:13:24.312840 | orchestrator | 2025-06-02 
20:13:24 | INFO  | Task 5073ad63-bfdc-4962-83a3-a999546f53f8 is in state STARTED 2025-06-02 20:13:24.312862 | orchestrator | 2025-06-02 20:13:24 | INFO  | Wait 1 second(s) until the next check 2025-06-02 20:13:27.343854 | orchestrator | 2025-06-02 20:13:27 | INFO  | Task 7364c8d4-18a3-43e8-b726-2d7f29a8a919 is in state STARTED 2025-06-02 20:13:27.344362 | orchestrator | 2025-06-02 20:13:27 | INFO  | Task 608c3d3d-0110-4202-aeb0-6f352d204193 is in state STARTED 2025-06-02 20:13:27.346931 | orchestrator | 2025-06-02 20:13:27 | INFO  | Task 53a945be-f52c-4bbb-b1aa-4647c40e8c66 is in state STARTED 2025-06-02 20:13:27.348014 | orchestrator | 2025-06-02 20:13:27 | INFO  | Task 5073ad63-bfdc-4962-83a3-a999546f53f8 is in state STARTED 2025-06-02 20:13:27.348060 | orchestrator | 2025-06-02 20:13:27 | INFO  | Wait 1 second(s) until the next check 2025-06-02 20:13:30.384342 | orchestrator | 2025-06-02 20:13:30 | INFO  | Task 7364c8d4-18a3-43e8-b726-2d7f29a8a919 is in state STARTED 2025-06-02 20:13:30.385944 | orchestrator | 2025-06-02 20:13:30 | INFO  | Task 608c3d3d-0110-4202-aeb0-6f352d204193 is in state STARTED 2025-06-02 20:13:30.386200 | orchestrator | 2025-06-02 20:13:30 | INFO  | Task 53a945be-f52c-4bbb-b1aa-4647c40e8c66 is in state STARTED 2025-06-02 20:13:30.386229 | orchestrator | 2025-06-02 20:13:30 | INFO  | Task 5073ad63-bfdc-4962-83a3-a999546f53f8 is in state STARTED 2025-06-02 20:13:30.386256 | orchestrator | 2025-06-02 20:13:30 | INFO  | Wait 1 second(s) until the next check 2025-06-02 20:13:33.409303 | orchestrator | 2025-06-02 20:13:33 | INFO  | Task a7dbb990-ff37-4850-8197-74aae0218bea is in state STARTED 2025-06-02 20:13:33.409523 | orchestrator | 2025-06-02 20:13:33 | INFO  | Task 7364c8d4-18a3-43e8-b726-2d7f29a8a919 is in state STARTED 2025-06-02 20:13:33.418394 | orchestrator | 2025-06-02 20:13:33 | INFO  | Task 608c3d3d-0110-4202-aeb0-6f352d204193 is in state STARTED 2025-06-02 20:13:33.418777 | orchestrator | 2025-06-02 20:13:33 | INFO  | Task 
53a945be-f52c-4bbb-b1aa-4647c40e8c66 is in state STARTED 2025-06-02 20:13:33.420448 | orchestrator | 2025-06-02 20:13:33 | INFO  | Task 5073ad63-bfdc-4962-83a3-a999546f53f8 is in state SUCCESS 2025-06-02 20:13:33.422901 | orchestrator | 2025-06-02 20:13:33.422995 | orchestrator | 2025-06-02 20:13:33.423012 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-06-02 20:13:33.423026 | orchestrator | 2025-06-02 20:13:33.423038 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-06-02 20:13:33.423050 | orchestrator | Monday 02 June 2025 20:12:34 +0000 (0:00:00.178) 0:00:00.178 *********** 2025-06-02 20:13:33.423061 | orchestrator | ok: [testbed-node-0] 2025-06-02 20:13:33.423073 | orchestrator | ok: [testbed-node-1] 2025-06-02 20:13:33.423084 | orchestrator | ok: [testbed-node-2] 2025-06-02 20:13:33.423094 | orchestrator | 2025-06-02 20:13:33.423105 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-06-02 20:13:33.423116 | orchestrator | Monday 02 June 2025 20:12:35 +0000 (0:00:00.411) 0:00:00.590 *********** 2025-06-02 20:13:33.423127 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True) 2025-06-02 20:13:33.423138 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True) 2025-06-02 20:13:33.423148 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True) 2025-06-02 20:13:33.423159 | orchestrator | 2025-06-02 20:13:33.423169 | orchestrator | PLAY [Wait for the Keystone service] ******************************************* 2025-06-02 20:13:33.423180 | orchestrator | 2025-06-02 20:13:33.423191 | orchestrator | TASK [Waiting for Keystone public port to be UP] ******************************* 2025-06-02 20:13:33.423201 | orchestrator | Monday 02 June 2025 20:12:36 +0000 (0:00:00.973) 0:00:01.563 *********** 2025-06-02 20:13:33.423212 | orchestrator | ok: [testbed-node-0] 2025-06-02 
20:13:33.423222 | orchestrator | ok: [testbed-node-2] 2025-06-02 20:13:33.423233 | orchestrator | ok: [testbed-node-1] 2025-06-02 20:13:33.423244 | orchestrator | 2025-06-02 20:13:33.423254 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-02 20:13:33.423266 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-02 20:13:33.423278 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-02 20:13:33.423289 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-02 20:13:33.423299 | orchestrator | 2025-06-02 20:13:33.423310 | orchestrator | 2025-06-02 20:13:33.423321 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-02 20:13:33.423331 | orchestrator | Monday 02 June 2025 20:12:36 +0000 (0:00:00.928) 0:00:02.491 *********** 2025-06-02 20:13:33.423359 | orchestrator | =============================================================================== 2025-06-02 20:13:33.423652 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.97s 2025-06-02 20:13:33.423671 | orchestrator | Waiting for Keystone public port to be UP ------------------------------- 0.93s 2025-06-02 20:13:33.423689 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.41s 2025-06-02 20:13:33.423708 | orchestrator | 2025-06-02 20:13:33.423727 | orchestrator | None 2025-06-02 20:13:33.423745 | orchestrator | 2025-06-02 20:13:33.423762 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-06-02 20:13:33.423779 | orchestrator | 2025-06-02 20:13:33.423798 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-06-02 20:13:33.423848 | orchestrator | Monday 02 June 2025 
20:10:26 +0000 (0:00:00.315) 0:00:00.315 *********** 2025-06-02 20:13:33.423860 | orchestrator | ok: [testbed-node-0] 2025-06-02 20:13:33.423871 | orchestrator | ok: [testbed-node-1] 2025-06-02 20:13:33.423882 | orchestrator | ok: [testbed-node-2] 2025-06-02 20:13:33.423892 | orchestrator | 2025-06-02 20:13:33.423903 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-06-02 20:13:33.423914 | orchestrator | Monday 02 June 2025 20:10:26 +0000 (0:00:00.501) 0:00:00.817 *********** 2025-06-02 20:13:33.423925 | orchestrator | ok: [testbed-node-0] => (item=enable_designate_True) 2025-06-02 20:13:33.423936 | orchestrator | ok: [testbed-node-1] => (item=enable_designate_True) 2025-06-02 20:13:33.423946 | orchestrator | ok: [testbed-node-2] => (item=enable_designate_True) 2025-06-02 20:13:33.423957 | orchestrator | 2025-06-02 20:13:33.423967 | orchestrator | PLAY [Apply role designate] **************************************************** 2025-06-02 20:13:33.423978 | orchestrator | 2025-06-02 20:13:33.423988 | orchestrator | TASK [designate : include_tasks] *********************************************** 2025-06-02 20:13:33.423999 | orchestrator | Monday 02 June 2025 20:10:27 +0000 (0:00:00.615) 0:00:01.432 *********** 2025-06-02 20:13:33.424026 | orchestrator | included: /ansible/roles/designate/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 20:13:33.424074 | orchestrator | 2025-06-02 20:13:33.424084 | orchestrator | TASK [service-ks-register : designate | Creating services] ********************* 2025-06-02 20:13:33.424094 | orchestrator | Monday 02 June 2025 20:10:27 +0000 (0:00:00.594) 0:00:02.027 *********** 2025-06-02 20:13:33.424103 | orchestrator | changed: [testbed-node-0] => (item=designate (dns)) 2025-06-02 20:13:33.424113 | orchestrator | 2025-06-02 20:13:33.424122 | orchestrator | TASK [service-ks-register : designate | Creating endpoints] ******************** 2025-06-02 
20:13:33.424132 | orchestrator | Monday 02 June 2025 20:10:31 +0000 (0:00:03.840) 0:00:05.868 *********** 2025-06-02 20:13:33.424141 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api-int.testbed.osism.xyz:9001 -> internal) 2025-06-02 20:13:33.424151 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api.testbed.osism.xyz:9001 -> public) 2025-06-02 20:13:33.424161 | orchestrator | 2025-06-02 20:13:33.424170 | orchestrator | TASK [service-ks-register : designate | Creating projects] ********************* 2025-06-02 20:13:33.424180 | orchestrator | Monday 02 June 2025 20:10:38 +0000 (0:00:06.560) 0:00:12.429 *********** 2025-06-02 20:13:33.424190 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-06-02 20:13:33.424199 | orchestrator | 2025-06-02 20:13:33.424209 | orchestrator | TASK [service-ks-register : designate | Creating users] ************************ 2025-06-02 20:13:33.424219 | orchestrator | Monday 02 June 2025 20:10:41 +0000 (0:00:03.441) 0:00:15.871 *********** 2025-06-02 20:13:33.424245 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-06-02 20:13:33.424255 | orchestrator | changed: [testbed-node-0] => (item=designate -> service) 2025-06-02 20:13:33.424265 | orchestrator | 2025-06-02 20:13:33.424274 | orchestrator | TASK [service-ks-register : designate | Creating roles] ************************ 2025-06-02 20:13:33.424283 | orchestrator | Monday 02 June 2025 20:10:45 +0000 (0:00:03.765) 0:00:19.636 *********** 2025-06-02 20:13:33.424293 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-06-02 20:13:33.424302 | orchestrator | 2025-06-02 20:13:33.424312 | orchestrator | TASK [service-ks-register : designate | Granting user roles] ******************* 2025-06-02 20:13:33.424321 | orchestrator | Monday 02 June 2025 20:10:49 +0000 (0:00:03.841) 0:00:23.478 *********** 2025-06-02 20:13:33.424331 | orchestrator | changed: [testbed-node-0] => (item=designate -> service 
-> admin) 2025-06-02 20:13:33.424340 | orchestrator | 2025-06-02 20:13:33.424349 | orchestrator | TASK [designate : Ensuring config directories exist] *************************** 2025-06-02 20:13:33.424359 | orchestrator | Monday 02 June 2025 20:10:53 +0000 (0:00:04.145) 0:00:27.623 *********** 2025-06-02 20:13:33.424371 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-06-02 20:13:33.424400 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 
'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-06-02 20:13:33.424411 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-06-02 20:13:33.424422 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-06-02 20:13:33.424441 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-06-02 20:13:33.424451 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-06-02 20:13:33.424473 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-06-02 20:13:33.424484 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 
'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-06-02 20:13:33.424494 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-06-02 20:13:33.424505 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-06-02 20:13:33.424522 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-06-02 20:13:33.424533 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-06-02 20:13:33.424599 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-06-02 20:13:33.424618 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': 
['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-06-02 20:13:33.424629 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-06-02 20:13:33.424639 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-06-02 20:13:33.424649 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-06-02 20:13:33.424666 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-06-02 20:13:33.424689 | orchestrator | 2025-06-02 20:13:33.424708 | orchestrator | TASK [designate : Check if policies shall be overwritten] ********************** 2025-06-02 20:13:33.424726 | orchestrator | Monday 02 June 2025 20:10:56 +0000 (0:00:03.039) 0:00:30.663 *********** 2025-06-02 20:13:33.424744 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:13:33.424764 | orchestrator | 2025-06-02 20:13:33.424783 | orchestrator | TASK [designate : Set designate policy file] *********************************** 2025-06-02 20:13:33.424801 | orchestrator | Monday 02 June 2025 20:10:56 +0000 (0:00:00.134) 0:00:30.797 *********** 2025-06-02 20:13:33.424813 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:13:33.424823 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:13:33.424832 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:13:33.424842 | orchestrator | 2025-06-02 20:13:33.424851 | orchestrator | TASK [designate : include_tasks] *********************************************** 2025-06-02 20:13:33.424860 | orchestrator | Monday 
02 June 2025 20:10:56 +0000 (0:00:00.283) 0:00:31.080 *********** 2025-06-02 20:13:33.424870 | orchestrator | included: /ansible/roles/designate/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 20:13:33.424879 | orchestrator | 2025-06-02 20:13:33.424888 | orchestrator | TASK [service-cert-copy : designate | Copying over extra CA certificates] ****** 2025-06-02 20:13:33.424898 | orchestrator | Monday 02 June 2025 20:10:57 +0000 (0:00:00.730) 0:00:31.811 *********** 2025-06-02 20:13:33.424913 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-06-02 20:13:33.424924 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-06-02 20:13:33.424935 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-06-02 20:13:33.424960 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-06-02 20:13:33.424971 | orchestrator | 
changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-06-02 20:13:33.424992 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-06-02 20:13:33.425002 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': 
'30'}}}) 2025-06-02 20:13:33.425012 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-06-02 20:13:33.425022 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-06-02 20:13:33.425044 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-06-02 20:13:33.425055 | orchestrator | changed: [testbed-node-1] => 
(item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-06-02 20:13:33.425065 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-06-02 20:13:33.425079 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-06-02 20:13:33.425089 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 
'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-06-02 20:13:33.425099 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-06-02 20:13:33.425109 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-06-02 20:13:33.425131 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-06-02 20:13:33.425142 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-06-02 20:13:33.425152 | orchestrator | 2025-06-02 20:13:33.425161 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS certificate] *** 2025-06-02 20:13:33.425171 | orchestrator | Monday 02 June 2025 20:11:03 +0000 (0:00:05.545) 0:00:37.356 *********** 2025-06-02 20:13:33.425185 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': 
{'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-06-02 20:13:33.425196 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-06-02 20:13:33.425206 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-06-02 20:13:33.425221 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-06-02 20:13:33.425236 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-06-02 20:13:33.425248 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-06-02 20:13:33.425258 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:13:33.425272 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-06-02 20:13:33.425282 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-06-02 20:13:33.425292 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-06-02 20:13:33.425308 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-06-02 20:13:33.425324 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-06-02 20:13:33.425335 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-06-02 20:13:33.425345 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:13:33.425355 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-06-02 20:13:33.425469 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-06-02 20:13:33.425491 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-06-02 20:13:33.425596 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-06-02 20:13:33.425622 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-06-02 20:13:33.425633 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-06-02 20:13:33.425643 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:13:33.425653 | orchestrator |
2025-06-02 20:13:33.425662 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS key] ***
2025-06-02 20:13:33.425672 | orchestrator | Monday 02 June 2025 20:11:04 +0000 (0:00:00.861) 0:00:38.218 ***********
2025-06-02 20:13:33.425698 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-06-02 20:13:33.425715 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-06-02 20:13:33.425741 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-06-02 20:13:33.425758 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-06-02 20:13:33.425783 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-06-02 20:13:33.425802 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-06-02 20:13:33.425819 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:13:33.425838 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-06-02 20:13:33.425848 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-06-02 20:13:33.425865 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-06-02 20:13:33.425875 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-06-02 20:13:33.425891 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-06-02 20:13:33.425902 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-06-02 20:13:33.425912 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:13:33.425925 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-06-02 20:13:33.425936 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-06-02 20:13:33.425951 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-06-02 20:13:33.425961 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-06-02 20:13:33.425976 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-06-02 20:13:33.425987 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-06-02 20:13:33.425997 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:13:33.426006 | orchestrator |
2025-06-02 20:13:33.426073 | orchestrator | TASK [designate : Copying over config.json files for services] *****************
2025-06-02 20:13:33.426086 | orchestrator | Monday 02 June 2025 20:11:05 +0000 (0:00:01.391) 0:00:39.609 ***********
2025-06-02 20:13:33.426100 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-06-02 20:13:33.426119 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-06-02 20:13:33.426129 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-06-02 20:13:33.426147 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-06-02 20:13:33.426157 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-06-02 20:13:33.426167 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-06-02 20:13:33.426181 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-06-02 20:13:33.426198 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-06-02 20:13:33.426208 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-06-02 20:13:33.426218 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-06-02 20:13:33.426234 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-06-02 20:13:33.426245 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-06-02 20:13:33.426255 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-06-02 20:13:33.426275 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-06-02 20:13:33.426285 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-06-02 20:13:33.426295 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-06-02 20:13:33.426867 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-06-02 20:13:33.426940 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-06-02 20:13:33.426956 | orchestrator |
2025-06-02 20:13:33.426968 | orchestrator | TASK [designate : Copying over designate.conf] *********************************
2025-06-02 20:13:33.426981 | orchestrator | Monday 02 June 2025 20:11:12 +0000 (0:00:06.867) 0:00:46.477 ***********
2025-06-02 20:13:33.426993 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-06-02 20:13:33.427059 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-06-02 20:13:33.427074 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-06-02 20:13:33.427112 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-06-02 20:13:33.427125 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-06-02 20:13:33.427137 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-06-02 20:13:33.427161 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-06-02 20:13:33.427173 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-06-02 20:13:33.427184 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-06-02 20:13:33.427196 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-06-02 20:13:33.427215 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-06-02 20:13:33.427227 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-06-02 20:13:33.427239 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-06-02 20:13:33.427262 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-06-02 20:13:33.427273 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-06-02 20:13:33.427285 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-06-02 20:13:33.427296 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 
'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-06-02 20:13:33.427313 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-06-02 20:13:33.427325 | orchestrator | 2025-06-02 20:13:33.427337 | orchestrator | TASK [designate : Copying over pools.yaml] ************************************* 2025-06-02 20:13:33.427348 | orchestrator | Monday 02 June 2025 20:11:31 +0000 (0:00:19.672) 0:01:06.149 *********** 2025-06-02 20:13:33.427359 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2025-06-02 20:13:33.427376 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2025-06-02 20:13:33.427386 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2025-06-02 20:13:33.427397 | orchestrator | 2025-06-02 20:13:33.427408 | orchestrator | TASK [designate : Copying over named.conf] ************************************* 2025-06-02 
20:13:33.427420 | orchestrator | Monday 02 June 2025 20:11:36 +0000 (0:00:04.572) 0:01:10.721 *********** 2025-06-02 20:13:33.427432 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/named.conf.j2) 2025-06-02 20:13:33.427445 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/named.conf.j2) 2025-06-02 20:13:33.427458 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/named.conf.j2) 2025-06-02 20:13:33.427470 | orchestrator | 2025-06-02 20:13:33.427481 | orchestrator | TASK [designate : Copying over rndc.conf] ************************************** 2025-06-02 20:13:33.427495 | orchestrator | Monday 02 June 2025 20:11:40 +0000 (0:00:03.729) 0:01:14.450 *********** 2025-06-02 20:13:33.427512 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-06-02 20:13:33.427526 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': 
['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-06-02 20:13:33.427546 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-06-02 20:13:33.427581 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-06-02 20:13:33.427602 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-06-02 20:13:33.427619 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-06-02 20:13:33.427632 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-06-02 20:13:33.427644 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-06-02 20:13:33.427658 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-06-02 20:13:33.427679 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-06-02 20:13:33.427702 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-06-02 20:13:33.427714 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-06-02 20:13:33.427732 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-06-02 20:13:33.427745 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-06-02 20:13:33.427758 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-06-02 20:13:33.427776 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-06-02 20:13:33.427794 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-06-02 20:13:33.427805 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-06-02 20:13:33.427816 | orchestrator | 2025-06-02 20:13:33.427827 | orchestrator | TASK [designate : Copying over rndc.key] *************************************** 2025-06-02 20:13:33.427838 | orchestrator | Monday 02 June 2025 20:11:42 +0000 (0:00:02.698) 0:01:17.149 *********** 2025-06-02 20:13:33.427854 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': 
['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-06-02 20:13:33.427866 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-06-02 20:13:33.427877 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-06-02 20:13:33.427908 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-06-02 20:13:33.427928 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-06-02 20:13:33.427954 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-06-02 20:13:33.427983 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-06-02 20:13:33.428007 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-06-02 20:13:33.428027 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-06-02 20:13:33.428069 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-06-02 20:13:33.428090 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-06-02 20:13:33.428110 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-06-02 20:13:33.428138 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-06-02 20:13:33.428159 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-06-02 20:13:33.428178 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-06-02 20:13:33.428196 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-06-02 20:13:33.428237 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-06-02 20:13:33.428254 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 
'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-06-02 20:13:33.428266 | orchestrator | 2025-06-02 20:13:33.428277 | orchestrator | TASK [designate : include_tasks] *********************************************** 2025-06-02 20:13:33.428288 | orchestrator | Monday 02 June 2025 20:11:46 +0000 (0:00:03.055) 0:01:20.204 *********** 2025-06-02 20:13:33.428299 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:13:33.428310 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:13:33.428321 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:13:33.428332 | orchestrator | 2025-06-02 20:13:33.428343 | orchestrator | TASK [designate : Copying over existing policy file] *************************** 2025-06-02 20:13:33.428353 | orchestrator | Monday 02 June 2025 20:11:46 +0000 (0:00:00.706) 0:01:20.910 *********** 2025-06-02 20:13:33.428370 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-06-02 20:13:33.428382 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-06-02 20:13:33.428400 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-06-02 20:13:33.428419 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-06-02 20:13:33.428430 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-06-02 20:13:33.428442 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-06-02 20:13:33.428453 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:13:33.428469 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-06-02 20:13:33.428480 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-06-02 20:13:33.428498 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-06-02 20:13:33.428515 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': 
['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-06-02 20:13:33.428526 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-06-02 20:13:33.428538 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-06-02 20:13:33.428569 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:13:33.428586 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': 
['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-06-02 20:13:33.428598 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-06-02 20:13:33.428616 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-06-02 
20:13:33.428634 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-06-02 20:13:33.428645 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-06-02 20:13:33.428656 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-06-02 20:13:33.428667 | orchestrator | skipping: [testbed-node-2] 2025-06-02 
20:13:33.428678 | orchestrator | 2025-06-02 20:13:33.428689 | orchestrator | TASK [designate : Check designate containers] ********************************** 2025-06-02 20:13:33.428700 | orchestrator | Monday 02 June 2025 20:11:47 +0000 (0:00:00.803) 0:01:21.714 *********** 2025-06-02 20:13:33.428715 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-06-02 20:13:33.428733 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-06-02 20:13:33.428751 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-06-02 20:13:33.428763 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-06-02 20:13:33.428774 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-06-02 20:13:33.428790 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-06-02 20:13:33.428801 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-06-02 20:13:33.428819 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 
'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-06-02 20:13:33.428830 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-06-02 20:13:33.428848 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-06-02 20:13:33.428859 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-06-02 20:13:33.428870 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-06-02 20:13:33.428890 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-06-02 20:13:33.428908 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': 
['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-06-02 20:13:33.428920 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-06-02 20:13:33.428931 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-06-02 20:13:33.428949 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-06-02 20:13:33.428961 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-06-02 20:13:33.428972 | orchestrator | 2025-06-02 20:13:33.428983 | orchestrator | TASK [designate : include_tasks] *********************************************** 2025-06-02 20:13:33.428994 | orchestrator | Monday 02 June 2025 20:11:52 +0000 (0:00:05.284) 0:01:26.998 *********** 2025-06-02 20:13:33.429004 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:13:33.429015 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:13:33.429026 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:13:33.429037 | orchestrator | 2025-06-02 20:13:33.429047 | orchestrator | TASK [designate : Creating Designate databases] ******************************** 2025-06-02 20:13:33.429058 | orchestrator | Monday 02 June 2025 20:11:53 +0000 (0:00:00.305) 0:01:27.304 *********** 2025-06-02 20:13:33.429069 | orchestrator | changed: [testbed-node-0] => (item=designate) 2025-06-02 20:13:33.429086 | orchestrator | 2025-06-02 20:13:33.429096 | orchestrator | TASK [designate : Creating Designate databases user and setting permissions] *** 2025-06-02 20:13:33.429112 | 
orchestrator | Monday 02 June 2025 20:11:55 +0000 (0:00:02.433) 0:01:29.737 *********** 2025-06-02 20:13:33.429122 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-06-02 20:13:33.429133 | orchestrator | changed: [testbed-node-0 -> {{ groups['designate-central'][0] }}] 2025-06-02 20:13:33.429144 | orchestrator | 2025-06-02 20:13:33.429155 | orchestrator | TASK [designate : Running Designate bootstrap container] *********************** 2025-06-02 20:13:33.429165 | orchestrator | Monday 02 June 2025 20:11:57 +0000 (0:00:02.278) 0:01:32.016 *********** 2025-06-02 20:13:33.429175 | orchestrator | changed: [testbed-node-0] 2025-06-02 20:13:33.429186 | orchestrator | 2025-06-02 20:13:33.429197 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2025-06-02 20:13:33.429207 | orchestrator | Monday 02 June 2025 20:12:13 +0000 (0:00:15.957) 0:01:47.974 *********** 2025-06-02 20:13:33.429218 | orchestrator | 2025-06-02 20:13:33.429228 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2025-06-02 20:13:33.429239 | orchestrator | Monday 02 June 2025 20:12:13 +0000 (0:00:00.133) 0:01:48.107 *********** 2025-06-02 20:13:33.429249 | orchestrator | 2025-06-02 20:13:33.429260 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2025-06-02 20:13:33.429271 | orchestrator | Monday 02 June 2025 20:12:14 +0000 (0:00:00.149) 0:01:48.257 *********** 2025-06-02 20:13:33.429281 | orchestrator | 2025-06-02 20:13:33.429292 | orchestrator | RUNNING HANDLER [designate : Restart designate-backend-bind9 container] ******** 2025-06-02 20:13:33.429303 | orchestrator | Monday 02 June 2025 20:12:14 +0000 (0:00:00.149) 0:01:48.407 *********** 2025-06-02 20:13:33.429313 | orchestrator | changed: [testbed-node-2] 2025-06-02 20:13:33.429324 | orchestrator | changed: [testbed-node-0] 2025-06-02 20:13:33.429334 | orchestrator | changed: [testbed-node-1] 
2025-06-02 20:13:33.429345 | orchestrator | 2025-06-02 20:13:33.429355 | orchestrator | RUNNING HANDLER [designate : Restart designate-api container] ****************** 2025-06-02 20:13:33.429366 | orchestrator | Monday 02 June 2025 20:12:28 +0000 (0:00:14.715) 0:02:03.122 *********** 2025-06-02 20:13:33.429377 | orchestrator | changed: [testbed-node-0] 2025-06-02 20:13:33.429387 | orchestrator | changed: [testbed-node-1] 2025-06-02 20:13:33.429398 | orchestrator | changed: [testbed-node-2] 2025-06-02 20:13:33.429409 | orchestrator | 2025-06-02 20:13:33.429419 | orchestrator | RUNNING HANDLER [designate : Restart designate-central container] ************** 2025-06-02 20:13:33.429430 | orchestrator | Monday 02 June 2025 20:12:41 +0000 (0:00:12.059) 0:02:15.181 *********** 2025-06-02 20:13:33.429441 | orchestrator | changed: [testbed-node-0] 2025-06-02 20:13:33.429451 | orchestrator | changed: [testbed-node-2] 2025-06-02 20:13:33.429462 | orchestrator | changed: [testbed-node-1] 2025-06-02 20:13:33.429472 | orchestrator | 2025-06-02 20:13:33.429483 | orchestrator | RUNNING HANDLER [designate : Restart designate-producer container] ************* 2025-06-02 20:13:33.429494 | orchestrator | Monday 02 June 2025 20:12:51 +0000 (0:00:10.786) 0:02:25.968 *********** 2025-06-02 20:13:33.429504 | orchestrator | changed: [testbed-node-0] 2025-06-02 20:13:33.429515 | orchestrator | changed: [testbed-node-1] 2025-06-02 20:13:33.429525 | orchestrator | changed: [testbed-node-2] 2025-06-02 20:13:33.429536 | orchestrator | 2025-06-02 20:13:33.429546 | orchestrator | RUNNING HANDLER [designate : Restart designate-mdns container] ***************** 2025-06-02 20:13:33.429613 | orchestrator | Monday 02 June 2025 20:13:04 +0000 (0:00:12.267) 0:02:38.236 *********** 2025-06-02 20:13:33.429625 | orchestrator | changed: [testbed-node-0] 2025-06-02 20:13:33.429636 | orchestrator | changed: [testbed-node-2] 2025-06-02 20:13:33.429647 | orchestrator | changed: [testbed-node-1] 2025-06-02 
20:13:33.429657 | orchestrator | 2025-06-02 20:13:33.429668 | orchestrator | RUNNING HANDLER [designate : Restart designate-worker container] *************** 2025-06-02 20:13:33.429742 | orchestrator | Monday 02 June 2025 20:13:15 +0000 (0:00:11.556) 0:02:49.792 *********** 2025-06-02 20:13:33.429756 | orchestrator | changed: [testbed-node-0] 2025-06-02 20:13:33.429777 | orchestrator | changed: [testbed-node-1] 2025-06-02 20:13:33.429788 | orchestrator | changed: [testbed-node-2] 2025-06-02 20:13:33.429798 | orchestrator | 2025-06-02 20:13:33.429809 | orchestrator | TASK [designate : Non-destructive DNS pools update] **************************** 2025-06-02 20:13:33.429819 | orchestrator | Monday 02 June 2025 20:13:24 +0000 (0:00:09.162) 0:02:58.955 *********** 2025-06-02 20:13:33.429830 | orchestrator | changed: [testbed-node-0] 2025-06-02 20:13:33.429841 | orchestrator | 2025-06-02 20:13:33.429851 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-02 20:13:33.429862 | orchestrator | testbed-node-0 : ok=29  changed=23  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-06-02 20:13:33.429875 | orchestrator | testbed-node-1 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-06-02 20:13:33.429886 | orchestrator | testbed-node-2 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-06-02 20:13:33.429897 | orchestrator | 2025-06-02 20:13:33.429908 | orchestrator | 2025-06-02 20:13:33.429918 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-02 20:13:33.429929 | orchestrator | Monday 02 June 2025 20:13:31 +0000 (0:00:07.172) 0:03:06.128 *********** 2025-06-02 20:13:33.429940 | orchestrator | =============================================================================== 2025-06-02 20:13:33.429950 | orchestrator | designate : Copying over designate.conf -------------------------------- 19.67s 
2025-06-02 20:13:33.429961 | orchestrator | designate : Running Designate bootstrap container ---------------------- 15.96s 2025-06-02 20:13:33.429972 | orchestrator | designate : Restart designate-backend-bind9 container ------------------ 14.72s 2025-06-02 20:13:33.429982 | orchestrator | designate : Restart designate-producer container ----------------------- 12.27s 2025-06-02 20:13:33.429993 | orchestrator | designate : Restart designate-api container ---------------------------- 12.06s 2025-06-02 20:13:33.430003 | orchestrator | designate : Restart designate-mdns container --------------------------- 11.56s 2025-06-02 20:13:33.430014 | orchestrator | designate : Restart designate-central container ------------------------ 10.79s 2025-06-02 20:13:33.430067 | orchestrator | designate : Restart designate-worker container -------------------------- 9.16s 2025-06-02 20:13:33.430078 | orchestrator | designate : Non-destructive DNS pools update ---------------------------- 7.17s 2025-06-02 20:13:33.430089 | orchestrator | designate : Copying over config.json files for services ----------------- 6.87s 2025-06-02 20:13:33.430100 | orchestrator | service-ks-register : designate | Creating endpoints -------------------- 6.56s 2025-06-02 20:13:33.430110 | orchestrator | service-cert-copy : designate | Copying over extra CA certificates ------ 5.55s 2025-06-02 20:13:33.430124 | orchestrator | designate : Check designate containers ---------------------------------- 5.28s 2025-06-02 20:13:33.430135 | orchestrator | designate : Copying over pools.yaml ------------------------------------- 4.57s 2025-06-02 20:13:33.430146 | orchestrator | service-ks-register : designate | Granting user roles ------------------- 4.15s 2025-06-02 20:13:33.430156 | orchestrator | service-ks-register : designate | Creating roles ------------------------ 3.84s 2025-06-02 20:13:33.430167 | orchestrator | service-ks-register : designate | Creating services --------------------- 3.84s 2025-06-02 
20:13:33.430178 | orchestrator | service-ks-register : designate | Creating users ------------------------ 3.77s 2025-06-02 20:13:33.430189 | orchestrator | designate : Copying over named.conf ------------------------------------- 3.73s 2025-06-02 20:13:33.430199 | orchestrator | service-ks-register : designate | Creating projects --------------------- 3.44s 2025-06-02 20:13:33.430210 | orchestrator | 2025-06-02 20:13:33 | INFO  | Wait 1 second(s) until the next check 2025-06-02 20:13:36.445571 | orchestrator | 2025-06-02 20:13:36 | INFO  | Task a7dbb990-ff37-4850-8197-74aae0218bea is in state STARTED 2025-06-02 20:13:36.445938 | orchestrator | 2025-06-02 20:13:36 | INFO  | Task 7364c8d4-18a3-43e8-b726-2d7f29a8a919 is in state STARTED 2025-06-02 20:13:36.446686 | orchestrator | 2025-06-02 20:13:36 | INFO  | Task 608c3d3d-0110-4202-aeb0-6f352d204193 is in state STARTED 2025-06-02 20:13:36.447225 | orchestrator | 2025-06-02 20:13:36 | INFO  | Task 53a945be-f52c-4bbb-b1aa-4647c40e8c66 is in state STARTED 2025-06-02 20:13:36.447244 | orchestrator | 2025-06-02 20:13:36 | INFO  | Wait 1 second(s) until the next check 2025-06-02 20:13:39.475245 | orchestrator | 2025-06-02 20:13:39 | INFO  | Task a7dbb990-ff37-4850-8197-74aae0218bea is in state STARTED 2025-06-02 20:13:39.475351 | orchestrator | 2025-06-02 20:13:39 | INFO  | Task 7364c8d4-18a3-43e8-b726-2d7f29a8a919 is in state STARTED 2025-06-02 20:13:39.479263 | orchestrator | 2025-06-02 20:13:39 | INFO  | Task 608c3d3d-0110-4202-aeb0-6f352d204193 is in state STARTED 2025-06-02 20:13:39.479659 | orchestrator | 2025-06-02 20:13:39 | INFO  | Task 53a945be-f52c-4bbb-b1aa-4647c40e8c66 is in state STARTED 2025-06-02 20:13:39.479930 | orchestrator | 2025-06-02 20:13:39 | INFO  | Wait 1 second(s) until the next check 2025-06-02 20:13:42.513932 | orchestrator | 2025-06-02 20:13:42 | INFO  | Task a7dbb990-ff37-4850-8197-74aae0218bea is in state STARTED 2025-06-02 20:13:42.514445 | orchestrator | 2025-06-02 20:13:42 | INFO  | Task 
7364c8d4-18a3-43e8-b726-2d7f29a8a919 is in state STARTED 2025-06-02 20:13:42.516102 | orchestrator | 2025-06-02 20:13:42 | INFO  | Task 608c3d3d-0110-4202-aeb0-6f352d204193 is in state STARTED 2025-06-02 20:13:42.516881 | orchestrator | 2025-06-02 20:13:42 | INFO  | Task 53a945be-f52c-4bbb-b1aa-4647c40e8c66 is in state STARTED 2025-06-02 20:13:42.516979 | orchestrator | 2025-06-02 20:13:42 | INFO  | Wait 1 second(s) until the next check 2025-06-02 20:13:45.567620 | orchestrator | 2025-06-02 20:13:45 | INFO  | Task a7dbb990-ff37-4850-8197-74aae0218bea is in state STARTED 2025-06-02 20:13:45.571019 | orchestrator | 2025-06-02 20:13:45 | INFO  | Task 7364c8d4-18a3-43e8-b726-2d7f29a8a919 is in state STARTED 2025-06-02 20:13:45.576979 | orchestrator | 2025-06-02 20:13:45 | INFO  | Task 608c3d3d-0110-4202-aeb0-6f352d204193 is in state STARTED 2025-06-02 20:13:45.579122 | orchestrator | 2025-06-02 20:13:45 | INFO  | Task 53a945be-f52c-4bbb-b1aa-4647c40e8c66 is in state STARTED 2025-06-02 20:13:45.579182 | orchestrator | 2025-06-02 20:13:45 | INFO  | Wait 1 second(s) until the next check 2025-06-02 20:13:48.624627 | orchestrator | 2025-06-02 20:13:48 | INFO  | Task a7dbb990-ff37-4850-8197-74aae0218bea is in state STARTED 2025-06-02 20:13:48.626587 | orchestrator | 2025-06-02 20:13:48 | INFO  | Task 7364c8d4-18a3-43e8-b726-2d7f29a8a919 is in state STARTED 2025-06-02 20:13:48.627249 | orchestrator | 2025-06-02 20:13:48 | INFO  | Task 608c3d3d-0110-4202-aeb0-6f352d204193 is in state STARTED 2025-06-02 20:13:48.628016 | orchestrator | 2025-06-02 20:13:48 | INFO  | Task 53a945be-f52c-4bbb-b1aa-4647c40e8c66 is in state STARTED 2025-06-02 20:13:48.628045 | orchestrator | 2025-06-02 20:13:48 | INFO  | Wait 1 second(s) until the next check 2025-06-02 20:13:51.686298 | orchestrator | 2025-06-02 20:13:51 | INFO  | Task a7dbb990-ff37-4850-8197-74aae0218bea is in state STARTED 2025-06-02 20:13:51.686795 | orchestrator | 2025-06-02 20:13:51 | INFO  | Task 
7364c8d4-18a3-43e8-b726-2d7f29a8a919 is in state STARTED 2025-06-02 20:13:51.689306 | orchestrator | 2025-06-02 20:13:51 | INFO  | Task 608c3d3d-0110-4202-aeb0-6f352d204193 is in state STARTED 2025-06-02 20:13:51.690808 | orchestrator | 2025-06-02 20:13:51 | INFO  | Task 53a945be-f52c-4bbb-b1aa-4647c40e8c66 is in state STARTED 2025-06-02 20:13:51.690855 | orchestrator | 2025-06-02 20:13:51 | INFO  | Wait 1 second(s) until the next check 2025-06-02 20:13:54.742412 | orchestrator | 2025-06-02 20:13:54 | INFO  | Task a7dbb990-ff37-4850-8197-74aae0218bea is in state STARTED 2025-06-02 20:13:54.742942 | orchestrator | 2025-06-02 20:13:54 | INFO  | Task 7364c8d4-18a3-43e8-b726-2d7f29a8a919 is in state STARTED 2025-06-02 20:13:54.743844 | orchestrator | 2025-06-02 20:13:54 | INFO  | Task 608c3d3d-0110-4202-aeb0-6f352d204193 is in state STARTED 2025-06-02 20:13:54.744978 | orchestrator | 2025-06-02 20:13:54 | INFO  | Task 53a945be-f52c-4bbb-b1aa-4647c40e8c66 is in state STARTED 2025-06-02 20:13:54.745148 | orchestrator | 2025-06-02 20:13:54 | INFO  | Wait 1 second(s) until the next check 2025-06-02 20:13:57.780966 | orchestrator | 2025-06-02 20:13:57 | INFO  | Task a7dbb990-ff37-4850-8197-74aae0218bea is in state STARTED 2025-06-02 20:13:57.781070 | orchestrator | 2025-06-02 20:13:57 | INFO  | Task 7364c8d4-18a3-43e8-b726-2d7f29a8a919 is in state STARTED 2025-06-02 20:13:57.781433 | orchestrator | 2025-06-02 20:13:57 | INFO  | Task 608c3d3d-0110-4202-aeb0-6f352d204193 is in state STARTED 2025-06-02 20:13:57.782300 | orchestrator | 2025-06-02 20:13:57 | INFO  | Task 53a945be-f52c-4bbb-b1aa-4647c40e8c66 is in state STARTED 2025-06-02 20:13:57.782360 | orchestrator | 2025-06-02 20:13:57 | INFO  | Wait 1 second(s) until the next check 2025-06-02 20:14:00.819126 | orchestrator | 2025-06-02 20:14:00 | INFO  | Task a7dbb990-ff37-4850-8197-74aae0218bea is in state STARTED 2025-06-02 20:14:00.820025 | orchestrator | 2025-06-02 20:14:00 | INFO  | Task 
7364c8d4-18a3-43e8-b726-2d7f29a8a919 is in state STARTED 2025-06-02 20:14:00.824787 | orchestrator | 2025-06-02 20:14:00 | INFO  | Task 608c3d3d-0110-4202-aeb0-6f352d204193 is in state STARTED 2025-06-02 20:14:00.827270 | orchestrator | 2025-06-02 20:14:00 | INFO  | Task 53a945be-f52c-4bbb-b1aa-4647c40e8c66 is in state STARTED 2025-06-02 20:14:00.827342 | orchestrator | 2025-06-02 20:14:00 | INFO  | Wait 1 second(s) until the next check 2025-06-02 20:14:03.862117 | orchestrator | 2025-06-02 20:14:03 | INFO  | Task a7dbb990-ff37-4850-8197-74aae0218bea is in state STARTED 2025-06-02 20:14:03.862225 | orchestrator | 2025-06-02 20:14:03 | INFO  | Task 7364c8d4-18a3-43e8-b726-2d7f29a8a919 is in state STARTED 2025-06-02 20:14:03.862240 | orchestrator | 2025-06-02 20:14:03 | INFO  | Task 608c3d3d-0110-4202-aeb0-6f352d204193 is in state STARTED 2025-06-02 20:14:03.862252 | orchestrator | 2025-06-02 20:14:03 | INFO  | Task 53a945be-f52c-4bbb-b1aa-4647c40e8c66 is in state STARTED 2025-06-02 20:14:03.862263 | orchestrator | 2025-06-02 20:14:03 | INFO  | Wait 1 second(s) until the next check 2025-06-02 20:14:06.906010 | orchestrator | 2025-06-02 20:14:06 | INFO  | Task a7dbb990-ff37-4850-8197-74aae0218bea is in state STARTED 2025-06-02 20:14:06.906159 | orchestrator | 2025-06-02 20:14:06 | INFO  | Task 7364c8d4-18a3-43e8-b726-2d7f29a8a919 is in state STARTED 2025-06-02 20:14:06.906174 | orchestrator | 2025-06-02 20:14:06 | INFO  | Task 608c3d3d-0110-4202-aeb0-6f352d204193 is in state STARTED 2025-06-02 20:14:06.906185 | orchestrator | 2025-06-02 20:14:06 | INFO  | Task 53a945be-f52c-4bbb-b1aa-4647c40e8c66 is in state STARTED 2025-06-02 20:14:06.906197 | orchestrator | 2025-06-02 20:14:06 | INFO  | Wait 1 second(s) until the next check 2025-06-02 20:14:09.931082 | orchestrator | 2025-06-02 20:14:09 | INFO  | Task a7dbb990-ff37-4850-8197-74aae0218bea is in state STARTED 2025-06-02 20:14:09.932182 | orchestrator | 2025-06-02 20:14:09 | INFO  | Task 
7364c8d4-18a3-43e8-b726-2d7f29a8a919 is in state STARTED 2025-06-02 20:14:09.932210 | orchestrator | 2025-06-02 20:14:09 | INFO  | Task 608c3d3d-0110-4202-aeb0-6f352d204193 is in state STARTED 2025-06-02 20:14:09.932805 | orchestrator | 2025-06-02 20:14:09 | INFO  | Task 53a945be-f52c-4bbb-b1aa-4647c40e8c66 is in state STARTED 2025-06-02 20:14:09.932904 | orchestrator | 2025-06-02 20:14:09 | INFO  | Wait 1 second(s) until the next check 2025-06-02 20:14:12.962946 | orchestrator | 2025-06-02 20:14:12 | INFO  | Task edda95ee-0720-4434-8b2c-a717dac3476c is in state STARTED 2025-06-02 20:14:12.963324 | orchestrator | 2025-06-02 20:14:12 | INFO  | Task a7dbb990-ff37-4850-8197-74aae0218bea is in state SUCCESS 2025-06-02 20:14:12.964054 | orchestrator | 2025-06-02 20:14:12 | INFO  | Task 7364c8d4-18a3-43e8-b726-2d7f29a8a919 is in state STARTED 2025-06-02 20:14:12.965875 | orchestrator | 2025-06-02 20:14:12 | INFO  | Task 608c3d3d-0110-4202-aeb0-6f352d204193 is in state STARTED 2025-06-02 20:14:12.968254 | orchestrator | 2025-06-02 20:14:12 | INFO  | Task 53a945be-f52c-4bbb-b1aa-4647c40e8c66 is in state STARTED 2025-06-02 20:14:12.968632 | orchestrator | 2025-06-02 20:14:12 | INFO  | Wait 1 second(s) until the next check 2025-06-02 20:14:16.011376 | orchestrator | 2025-06-02 20:14:16 | INFO  | Task edda95ee-0720-4434-8b2c-a717dac3476c is in state STARTED 2025-06-02 20:14:16.011631 | orchestrator | 2025-06-02 20:14:16 | INFO  | Task 7364c8d4-18a3-43e8-b726-2d7f29a8a919 is in state STARTED 2025-06-02 20:14:16.013509 | orchestrator | 2025-06-02 20:14:16 | INFO  | Task 608c3d3d-0110-4202-aeb0-6f352d204193 is in state STARTED 2025-06-02 20:14:16.014743 | orchestrator | 2025-06-02 20:14:16 | INFO  | Task 53a945be-f52c-4bbb-b1aa-4647c40e8c66 is in state STARTED 2025-06-02 20:14:16.014796 | orchestrator | 2025-06-02 20:14:16 | INFO  | Wait 1 second(s) until the next check 2025-06-02 20:14:19.052943 | orchestrator | 2025-06-02 20:14:19 | INFO  | Task 
edda95ee-0720-4434-8b2c-a717dac3476c is in state STARTED 2025-06-02 20:14:19.054312 | orchestrator | 2025-06-02 20:14:19 | INFO  | Task 7364c8d4-18a3-43e8-b726-2d7f29a8a919 is in state STARTED 2025-06-02 20:14:19.055201 | orchestrator | 2025-06-02 20:14:19 | INFO  | Task 608c3d3d-0110-4202-aeb0-6f352d204193 is in state STARTED 2025-06-02 20:14:19.056460 | orchestrator | 2025-06-02 20:14:19 | INFO  | Task 53a945be-f52c-4bbb-b1aa-4647c40e8c66 is in state STARTED 2025-06-02 20:14:19.056501 | orchestrator | 2025-06-02 20:14:19 | INFO  | Wait 1 second(s) until the next check 2025-06-02 20:14:22.088784 | orchestrator | 2025-06-02 20:14:22 | INFO  | Task edda95ee-0720-4434-8b2c-a717dac3476c is in state STARTED 2025-06-02 20:14:22.089709 | orchestrator | 2025-06-02 20:14:22 | INFO  | Task 7364c8d4-18a3-43e8-b726-2d7f29a8a919 is in state STARTED 2025-06-02 20:14:22.089965 | orchestrator | 2025-06-02 20:14:22 | INFO  | Task 608c3d3d-0110-4202-aeb0-6f352d204193 is in state STARTED 2025-06-02 20:14:22.090985 | orchestrator | 2025-06-02 20:14:22 | INFO  | Task 53a945be-f52c-4bbb-b1aa-4647c40e8c66 is in state STARTED 2025-06-02 20:14:22.091050 | orchestrator | 2025-06-02 20:14:22 | INFO  | Wait 1 second(s) until the next check 2025-06-02 20:14:25.128590 | orchestrator | 2025-06-02 20:14:25 | INFO  | Task edda95ee-0720-4434-8b2c-a717dac3476c is in state STARTED 2025-06-02 20:14:25.129388 | orchestrator | 2025-06-02 20:14:25 | INFO  | Task 7364c8d4-18a3-43e8-b726-2d7f29a8a919 is in state STARTED 2025-06-02 20:14:25.134429 | orchestrator | 2025-06-02 20:14:25 | INFO  | Task 608c3d3d-0110-4202-aeb0-6f352d204193 is in state STARTED 2025-06-02 20:14:25.134501 | orchestrator | 2025-06-02 20:14:25 | INFO  | Task 53a945be-f52c-4bbb-b1aa-4647c40e8c66 is in state STARTED 2025-06-02 20:14:25.134551 | orchestrator | 2025-06-02 20:14:25 | INFO  | Wait 1 second(s) until the next check 2025-06-02 20:14:28.166839 | orchestrator | 2025-06-02 20:14:28 | INFO  | Task 
edda95ee-0720-4434-8b2c-a717dac3476c is in state STARTED 2025-06-02 20:14:28.167251 | orchestrator | 2025-06-02 20:14:28 | INFO  | Task 7364c8d4-18a3-43e8-b726-2d7f29a8a919 is in state STARTED 2025-06-02 20:14:28.168461 | orchestrator | 2025-06-02 20:14:28 | INFO  | Task 608c3d3d-0110-4202-aeb0-6f352d204193 is in state STARTED 2025-06-02 20:14:28.169177 | orchestrator | 2025-06-02 20:14:28 | INFO  | Task 53a945be-f52c-4bbb-b1aa-4647c40e8c66 is in state STARTED 2025-06-02 20:14:28.169214 | orchestrator | 2025-06-02 20:14:28 | INFO  | Wait 1 second(s) until the next check 2025-06-02 20:14:31.222631 | orchestrator | 2025-06-02 20:14:31 | INFO  | Task edda95ee-0720-4434-8b2c-a717dac3476c is in state STARTED 2025-06-02 20:14:31.227953 | orchestrator | 2025-06-02 20:14:31.228020 | orchestrator | 2025-06-02 20:14:31.228028 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-06-02 20:14:31.228037 | orchestrator | 2025-06-02 20:14:31.228042 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-06-02 20:14:31.228047 | orchestrator | Monday 02 June 2025 20:13:37 +0000 (0:00:00.422) 0:00:00.422 *********** 2025-06-02 20:14:31.228052 | orchestrator | ok: [testbed-node-0] 2025-06-02 20:14:31.228059 | orchestrator | ok: [testbed-node-1] 2025-06-02 20:14:31.228064 | orchestrator | ok: [testbed-node-2] 2025-06-02 20:14:31.228069 | orchestrator | ok: [testbed-manager] 2025-06-02 20:14:31.228073 | orchestrator | ok: [testbed-node-3] 2025-06-02 20:14:31.228078 | orchestrator | ok: [testbed-node-4] 2025-06-02 20:14:31.228083 | orchestrator | ok: [testbed-node-5] 2025-06-02 20:14:31.228091 | orchestrator | 2025-06-02 20:14:31.228099 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-06-02 20:14:31.228116 | orchestrator | Monday 02 June 2025 20:13:38 +0000 (0:00:00.730) 0:00:01.152 *********** 2025-06-02 20:14:31.228122 | orchestrator | 
ok: [testbed-node-0] => (item=enable_ceph_rgw_True) 2025-06-02 20:14:31.228126 | orchestrator | ok: [testbed-node-1] => (item=enable_ceph_rgw_True) 2025-06-02 20:14:31.228131 | orchestrator | ok: [testbed-node-2] => (item=enable_ceph_rgw_True) 2025-06-02 20:14:31.228136 | orchestrator | ok: [testbed-manager] => (item=enable_ceph_rgw_True) 2025-06-02 20:14:31.228141 | orchestrator | ok: [testbed-node-3] => (item=enable_ceph_rgw_True) 2025-06-02 20:14:31.228145 | orchestrator | ok: [testbed-node-4] => (item=enable_ceph_rgw_True) 2025-06-02 20:14:31.228153 | orchestrator | ok: [testbed-node-5] => (item=enable_ceph_rgw_True) 2025-06-02 20:14:31.228158 | orchestrator | 2025-06-02 20:14:31.228163 | orchestrator | PLAY [Apply role ceph-rgw] ***************************************************** 2025-06-02 20:14:31.228167 | orchestrator | 2025-06-02 20:14:31.228172 | orchestrator | TASK [ceph-rgw : include_tasks] ************************************************ 2025-06-02 20:14:31.228176 | orchestrator | Monday 02 June 2025 20:13:38 +0000 (0:00:00.740) 0:00:01.893 *********** 2025-06-02 20:14:31.228182 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-02 20:14:31.228188 | orchestrator | 2025-06-02 20:14:31.228193 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating services] ********************** 2025-06-02 20:14:31.228197 | orchestrator | Monday 02 June 2025 20:13:40 +0000 (0:00:01.870) 0:00:03.763 *********** 2025-06-02 20:14:31.228202 | orchestrator | changed: [testbed-node-0] => (item=swift (object-store)) 2025-06-02 20:14:31.228207 | orchestrator | 2025-06-02 20:14:31.228211 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating endpoints] ********************* 2025-06-02 20:14:31.228216 | orchestrator | Monday 02 June 2025 20:13:44 +0000 (0:00:03.265) 0:00:07.028 *********** 2025-06-02 20:14:31.228221 | 
orchestrator | changed: [testbed-node-0] => (item=swift -> https://api-int.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> internal) 2025-06-02 20:14:31.228227 | orchestrator | changed: [testbed-node-0] => (item=swift -> https://api.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> public) 2025-06-02 20:14:31.228232 | orchestrator | 2025-06-02 20:14:31.228236 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating projects] ********************** 2025-06-02 20:14:31.228255 | orchestrator | Monday 02 June 2025 20:13:50 +0000 (0:00:06.629) 0:00:13.658 *********** 2025-06-02 20:14:31.228260 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-06-02 20:14:31.228265 | orchestrator | 2025-06-02 20:14:31.228270 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating users] ************************* 2025-06-02 20:14:31.228275 | orchestrator | Monday 02 June 2025 20:13:54 +0000 (0:00:03.467) 0:00:17.126 *********** 2025-06-02 20:14:31.228279 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-06-02 20:14:31.228284 | orchestrator | changed: [testbed-node-0] => (item=ceph_rgw -> service) 2025-06-02 20:14:31.228288 | orchestrator | 2025-06-02 20:14:31.228293 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating roles] ************************* 2025-06-02 20:14:31.228297 | orchestrator | Monday 02 June 2025 20:13:58 +0000 (0:00:04.266) 0:00:21.392 *********** 2025-06-02 20:14:31.228302 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-06-02 20:14:31.228307 | orchestrator | changed: [testbed-node-0] => (item=ResellerAdmin) 2025-06-02 20:14:31.228311 | orchestrator | 2025-06-02 20:14:31.228316 | orchestrator | TASK [service-ks-register : ceph-rgw | Granting user roles] ******************** 2025-06-02 20:14:31.228324 | orchestrator | Monday 02 June 2025 20:14:05 +0000 (0:00:06.734) 0:00:28.127 *********** 2025-06-02 20:14:31.228329 | orchestrator | changed: [testbed-node-0] => (item=ceph_rgw -> 
service -> admin) 2025-06-02 20:14:31.228334 | orchestrator | 2025-06-02 20:14:31.228338 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-02 20:14:31.228343 | orchestrator | testbed-manager : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-02 20:14:31.228347 | orchestrator | testbed-node-0 : ok=9  changed=5  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-02 20:14:31.228352 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-02 20:14:31.228357 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-02 20:14:31.228362 | orchestrator | testbed-node-3 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-02 20:14:31.228379 | orchestrator | testbed-node-4 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-02 20:14:31.228387 | orchestrator | testbed-node-5 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-02 20:14:31.228394 | orchestrator | 2025-06-02 20:14:31.228402 | orchestrator | 2025-06-02 20:14:31.228410 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-02 20:14:31.228416 | orchestrator | Monday 02 June 2025 20:14:10 +0000 (0:00:05.557) 0:00:33.685 *********** 2025-06-02 20:14:31.228423 | orchestrator | =============================================================================== 2025-06-02 20:14:31.228430 | orchestrator | service-ks-register : ceph-rgw | Creating roles ------------------------- 6.73s 2025-06-02 20:14:31.228437 | orchestrator | service-ks-register : ceph-rgw | Creating endpoints --------------------- 6.63s 2025-06-02 20:14:31.228449 | orchestrator | service-ks-register : ceph-rgw | Granting user roles -------------------- 5.56s 2025-06-02 20:14:31.228456 | orchestrator | service-ks-register : ceph-rgw 
| Creating users ------------------------- 4.27s 2025-06-02 20:14:31.228463 | orchestrator | service-ks-register : ceph-rgw | Creating projects ---------------------- 3.47s 2025-06-02 20:14:31.228470 | orchestrator | service-ks-register : ceph-rgw | Creating services ---------------------- 3.27s 2025-06-02 20:14:31.228477 | orchestrator | ceph-rgw : include_tasks ------------------------------------------------ 1.87s 2025-06-02 20:14:31.228484 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.74s 2025-06-02 20:14:31.228491 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.73s 2025-06-02 20:14:31.228535 | orchestrator | 2025-06-02 20:14:31.228543 | orchestrator | 2025-06-02 20:14:31.228550 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-06-02 20:14:31.228557 | orchestrator | 2025-06-02 20:14:31.228565 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-06-02 20:14:31.228573 | orchestrator | Monday 02 June 2025 20:12:34 +0000 (0:00:00.279) 0:00:00.279 *********** 2025-06-02 20:14:31.228580 | orchestrator | ok: [testbed-node-0] 2025-06-02 20:14:31.228589 | orchestrator | ok: [testbed-node-1] 2025-06-02 20:14:31.228593 | orchestrator | ok: [testbed-node-2] 2025-06-02 20:14:31.228598 | orchestrator | 2025-06-02 20:14:31.228602 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-06-02 20:14:31.228607 | orchestrator | Monday 02 June 2025 20:12:34 +0000 (0:00:00.224) 0:00:00.503 *********** 2025-06-02 20:14:31.228611 | orchestrator | ok: [testbed-node-0] => (item=enable_magnum_True) 2025-06-02 20:14:31.228615 | orchestrator | ok: [testbed-node-1] => (item=enable_magnum_True) 2025-06-02 20:14:31.228620 | orchestrator | ok: [testbed-node-2] => (item=enable_magnum_True) 2025-06-02 20:14:31.228624 | orchestrator | 2025-06-02 
20:14:31.228629 | orchestrator | PLAY [Apply role magnum] ******************************************************* 2025-06-02 20:14:31.228633 | orchestrator | 2025-06-02 20:14:31.228637 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2025-06-02 20:14:31.228642 | orchestrator | Monday 02 June 2025 20:12:34 +0000 (0:00:00.448) 0:00:00.952 *********** 2025-06-02 20:14:31.228646 | orchestrator | included: /ansible/roles/magnum/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 20:14:31.228651 | orchestrator | 2025-06-02 20:14:31.228656 | orchestrator | TASK [service-ks-register : magnum | Creating services] ************************ 2025-06-02 20:14:31.228660 | orchestrator | Monday 02 June 2025 20:12:35 +0000 (0:00:00.893) 0:00:01.846 *********** 2025-06-02 20:14:31.228665 | orchestrator | changed: [testbed-node-0] => (item=magnum (container-infra)) 2025-06-02 20:14:31.228669 | orchestrator | 2025-06-02 20:14:31.228674 | orchestrator | TASK [service-ks-register : magnum | Creating endpoints] *********************** 2025-06-02 20:14:31.228678 | orchestrator | Monday 02 June 2025 20:12:39 +0000 (0:00:03.819) 0:00:05.666 *********** 2025-06-02 20:14:31.228682 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api-int.testbed.osism.xyz:9511/v1 -> internal) 2025-06-02 20:14:31.228687 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api.testbed.osism.xyz:9511/v1 -> public) 2025-06-02 20:14:31.228692 | orchestrator | 2025-06-02 20:14:31.228696 | orchestrator | TASK [service-ks-register : magnum | Creating projects] ************************ 2025-06-02 20:14:31.228701 | orchestrator | Monday 02 June 2025 20:12:46 +0000 (0:00:07.163) 0:00:12.830 *********** 2025-06-02 20:14:31.228705 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-06-02 20:14:31.228710 | orchestrator | 2025-06-02 20:14:31.228714 | orchestrator | TASK [service-ks-register : magnum | 
Creating users] *************************** 2025-06-02 20:14:31.228718 | orchestrator | Monday 02 June 2025 20:12:49 +0000 (0:00:03.223) 0:00:16.053 *********** 2025-06-02 20:14:31.228723 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-06-02 20:14:31.228727 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service) 2025-06-02 20:14:31.228732 | orchestrator | 2025-06-02 20:14:31.228736 | orchestrator | TASK [service-ks-register : magnum | Creating roles] *************************** 2025-06-02 20:14:31.228740 | orchestrator | Monday 02 June 2025 20:12:54 +0000 (0:00:04.097) 0:00:20.150 *********** 2025-06-02 20:14:31.228745 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-06-02 20:14:31.228749 | orchestrator | 2025-06-02 20:14:31.228754 | orchestrator | TASK [service-ks-register : magnum | Granting user roles] ********************** 2025-06-02 20:14:31.228758 | orchestrator | Monday 02 June 2025 20:12:57 +0000 (0:00:03.640) 0:00:23.790 *********** 2025-06-02 20:14:31.228774 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service -> admin) 2025-06-02 20:14:31.228778 | orchestrator | 2025-06-02 20:14:31.228787 | orchestrator | TASK [magnum : Creating Magnum trustee domain] ********************************* 2025-06-02 20:14:31.228792 | orchestrator | Monday 02 June 2025 20:13:01 +0000 (0:00:03.943) 0:00:27.734 *********** 2025-06-02 20:14:31.228796 | orchestrator | changed: [testbed-node-0] 2025-06-02 20:14:31.228801 | orchestrator | 2025-06-02 20:14:31.228812 | orchestrator | TASK [magnum : Creating Magnum trustee user] *********************************** 2025-06-02 20:14:31.228821 | orchestrator | Monday 02 June 2025 20:13:05 +0000 (0:00:03.629) 0:00:31.364 *********** 2025-06-02 20:14:31.228826 | orchestrator | changed: [testbed-node-0] 2025-06-02 20:14:31.228830 | orchestrator | 2025-06-02 20:14:31.228835 | orchestrator | TASK [magnum : Creating Magnum trustee user role] 
****************************** 2025-06-02 20:14:31.228839 | orchestrator | Monday 02 June 2025 20:13:09 +0000 (0:00:04.186) 0:00:35.551 *********** 2025-06-02 20:14:31.228843 | orchestrator | changed: [testbed-node-0] 2025-06-02 20:14:31.228848 | orchestrator | 2025-06-02 20:14:31.228852 | orchestrator | TASK [magnum : Ensuring config directories exist] ****************************** 2025-06-02 20:14:31.228857 | orchestrator | Monday 02 June 2025 20:13:13 +0000 (0:00:03.906) 0:00:39.457 *********** 2025-06-02 20:14:31.228867 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-06-02 20:14:31.228875 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-06-02 20:14:31.228887 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-06-02 20:14:31.228893 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-06-02 20:14:31.228908 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-06-02 20:14:31.228916 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-06-02 20:14:31.228921 | orchestrator | 2025-06-02 20:14:31.228925 | orchestrator | TASK [magnum : Check if policies shall be overwritten] ************************* 2025-06-02 20:14:31.228930 | orchestrator | Monday 02 June 2025 20:13:15 +0000 (0:00:02.090) 0:00:41.548 
*********** 2025-06-02 20:14:31.228935 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:14:31.228939 | orchestrator | 2025-06-02 20:14:31.228944 | orchestrator | TASK [magnum : Set magnum policy file] ***************************************** 2025-06-02 20:14:31.228948 | orchestrator | Monday 02 June 2025 20:13:15 +0000 (0:00:00.238) 0:00:41.786 *********** 2025-06-02 20:14:31.228953 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:14:31.228957 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:14:31.228961 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:14:31.228966 | orchestrator | 2025-06-02 20:14:31.228970 | orchestrator | TASK [magnum : Check if kubeconfig file is supplied] *************************** 2025-06-02 20:14:31.228975 | orchestrator | Monday 02 June 2025 20:13:16 +0000 (0:00:00.768) 0:00:42.554 *********** 2025-06-02 20:14:31.228979 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-06-02 20:14:31.228984 | orchestrator | 2025-06-02 20:14:31.228988 | orchestrator | TASK [magnum : Copying over kubeconfig file] *********************************** 2025-06-02 20:14:31.228992 | orchestrator | Monday 02 June 2025 20:13:17 +0000 (0:00:01.045) 0:00:43.599 *********** 2025-06-02 20:14:31.228997 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 
'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-06-02 20:14:31.229006 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-06-02 20:14:31.229016 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': 
True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-06-02 20:14:31.229022 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-06-02 20:14:31.229026 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-06-02 20:14:31.229031 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 
'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-06-02 20:14:31.229040 | orchestrator | 2025-06-02 20:14:31.229044 | orchestrator | TASK [magnum : Set magnum kubeconfig file's path] ****************************** 2025-06-02 20:14:31.229049 | orchestrator | Monday 02 June 2025 20:13:20 +0000 (0:00:03.192) 0:00:46.791 *********** 2025-06-02 20:14:31.229053 | orchestrator | ok: [testbed-node-0] 2025-06-02 20:14:31.229058 | orchestrator | ok: [testbed-node-1] 2025-06-02 20:14:31.229062 | orchestrator | ok: [testbed-node-2] 2025-06-02 20:14:31.229067 | orchestrator | 2025-06-02 20:14:31.229071 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2025-06-02 20:14:31.229076 | orchestrator | Monday 02 June 2025 20:13:20 +0000 (0:00:00.252) 0:00:47.044 *********** 2025-06-02 20:14:31.229080 | orchestrator | included: /ansible/roles/magnum/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 20:14:31.229085 | orchestrator | 2025-06-02 20:14:31.229089 | orchestrator | TASK [service-cert-copy : magnum | Copying over extra CA certificates] ********* 2025-06-02 20:14:31.229094 | orchestrator | Monday 02 June 2025 20:13:21 +0000 (0:00:00.573) 0:00:47.617 *********** 2025-06-02 20:14:31.229105 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': 
['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-06-02 20:14:31.229110 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-06-02 20:14:31.229115 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-06-02 20:14:31.229125 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-06-02 20:14:31.229130 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-06-02 20:14:31.229138 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-06-02 20:14:31.229143 | orchestrator | 2025-06-02 20:14:31.229150 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS certificate] *** 2025-06-02 20:14:31.229155 | orchestrator | Monday 02 June 2025 20:13:23 +0000 (0:00:02.175) 0:00:49.793 *********** 2025-06-02 20:14:31.229160 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': 
{'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-06-02 20:14:31.229164 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-06-02 20:14:31.229181 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:14:31.229189 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-06-02 20:14:31.229202 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-06-02 20:14:31.229210 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:14:31.229222 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-06-02 20:14:31.229231 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-06-02 20:14:31.229241 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:14:31.229246 | orchestrator | 2025-06-02 20:14:31.229250 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS key] ****** 2025-06-02 20:14:31.229255 | orchestrator | Monday 02 June 2025 20:13:24 +0000 (0:00:00.498) 0:00:50.291 *********** 2025-06-02 20:14:31.229260 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-06-02 20:14:31.229264 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': 
{'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-06-02 20:14:31.229269 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:14:31.229278 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-06-02 20:14:31 | INFO  | Task 7364c8d4-18a3-43e8-b726-2d7f29a8a919 is in state SUCCESS 2025-06-02 20:14:31.229300 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 
'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-06-02 20:14:31.229309 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:14:31.229314 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-06-02 20:14:31.229319 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 
'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-06-02 20:14:31.229324 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:14:31.229328 | orchestrator | 2025-06-02 20:14:31.229333 | orchestrator | TASK [magnum : Copying over config.json files for services] ******************** 2025-06-02 20:14:31.229337 | orchestrator | Monday 02 June 2025 20:13:25 +0000 (0:00:00.980) 0:00:51.271 *********** 2025-06-02 20:14:31.229345 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-06-02 20:14:31.229354 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-06-02 20:14:31.229359 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-06-02 20:14:31.229368 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 
'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-06-02 20:14:31.229373 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-06-02 20:14:31.229381 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-06-02 20:14:31.229386 | orchestrator | 2025-06-02 20:14:31.229390 | orchestrator | TASK [magnum : Copying over magnum.conf] *************************************** 2025-06-02 20:14:31.229395 | orchestrator | Monday 02 June 2025 20:13:27 +0000 (0:00:02.207) 0:00:53.479 *********** 2025-06-02 20:14:31.229402 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-06-02 20:14:31.229412 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': 
{'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-06-02 20:14:31.229416 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-06-02 20:14:31.229421 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': 
'30'}}}) 2025-06-02 20:14:31.229430 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-06-02 20:14:31.229438 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-06-02 20:14:31.229446 | orchestrator | 2025-06-02 20:14:31.229451 | orchestrator | TASK [magnum : Copying over existing policy file] ****************************** 2025-06-02 20:14:31.229455 | orchestrator | Monday 02 June 2025 20:13:32 +0000 (0:00:04.845) 0:00:58.324 *********** 2025-06-02 20:14:31.229460 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 
'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-06-02 20:14:31.229465 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-06-02 20:14:31.229470 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:14:31.229474 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 
'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-06-02 20:14:31.229482 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-06-02 20:14:31.229494 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:14:31.229499 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-06-02 20:14:31.229547 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-06-02 20:14:31.229554 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:14:31.229559 | orchestrator | 2025-06-02 20:14:31.229563 | orchestrator | TASK [magnum : Check magnum containers] **************************************** 2025-06-02 20:14:31.229568 | orchestrator | Monday 02 June 2025 20:13:33 +0000 (0:00:01.670) 0:00:59.995 *********** 2025-06-02 20:14:31.229572 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-06-02 20:14:31.229582 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-06-02 20:14:31.229595 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-06-02 20:14:31.229600 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-06-02 20:14:31.229605 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-06-02 20:14:31.229610 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-06-02 20:14:31.229614 | orchestrator | 2025-06-02 20:14:31.229619 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2025-06-02 20:14:31.229623 | orchestrator | Monday 02 June 2025 20:13:35 +0000 (0:00:02.012) 0:01:02.008 *********** 2025-06-02 20:14:31.229628 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:14:31.229632 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:14:31.229637 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:14:31.229641 | orchestrator | 2025-06-02 20:14:31.229646 | orchestrator | TASK [magnum : Creating Magnum database] *************************************** 2025-06-02 20:14:31.229650 | orchestrator | Monday 02 June 2025 20:13:36 +0000 (0:00:00.346) 0:01:02.354 *********** 2025-06-02 20:14:31.229655 | orchestrator | changed: [testbed-node-0] 2025-06-02 20:14:31.229659 | orchestrator | 2025-06-02 20:14:31.229669 | orchestrator | TASK [magnum : Creating Magnum database user and setting permissions] ********** 2025-06-02 20:14:31.229673 | orchestrator | Monday 02 June 2025 20:13:38 +0000 (0:00:02.134) 0:01:04.489 *********** 2025-06-02 
20:14:31.229678 | orchestrator | changed: [testbed-node-0] 2025-06-02 20:14:31.229682 | orchestrator | 2025-06-02 20:14:31.229691 | orchestrator | TASK [magnum : Running Magnum bootstrap container] ***************************** 2025-06-02 20:14:31.229696 | orchestrator | Monday 02 June 2025 20:13:40 +0000 (0:00:02.301) 0:01:06.790 *********** 2025-06-02 20:14:31.229700 | orchestrator | changed: [testbed-node-0] 2025-06-02 20:14:31.229704 | orchestrator | 2025-06-02 20:14:31.229709 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2025-06-02 20:14:31.229713 | orchestrator | Monday 02 June 2025 20:13:55 +0000 (0:00:14.985) 0:01:21.776 *********** 2025-06-02 20:14:31.229718 | orchestrator | 2025-06-02 20:14:31.229722 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2025-06-02 20:14:31.229727 | orchestrator | Monday 02 June 2025 20:13:55 +0000 (0:00:00.058) 0:01:21.834 *********** 2025-06-02 20:14:31.229731 | orchestrator | 2025-06-02 20:14:31.229735 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2025-06-02 20:14:31.229743 | orchestrator | Monday 02 June 2025 20:13:55 +0000 (0:00:00.107) 0:01:21.941 *********** 2025-06-02 20:14:31.229747 | orchestrator | 2025-06-02 20:14:31.229751 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-api container] ************************ 2025-06-02 20:14:31.229756 | orchestrator | Monday 02 June 2025 20:13:55 +0000 (0:00:00.143) 0:01:22.085 *********** 2025-06-02 20:14:31.229760 | orchestrator | changed: [testbed-node-0] 2025-06-02 20:14:31.229765 | orchestrator | changed: [testbed-node-1] 2025-06-02 20:14:31.229769 | orchestrator | changed: [testbed-node-2] 2025-06-02 20:14:31.229774 | orchestrator | 2025-06-02 20:14:31.229778 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-conductor container] ****************** 2025-06-02 20:14:31.229782 | orchestrator | Monday 02 June 
2025 20:14:10 +0000 (0:00:14.157) 0:01:36.242 *********** 2025-06-02 20:14:31.229787 | orchestrator | changed: [testbed-node-0] 2025-06-02 20:14:31.229791 | orchestrator | changed: [testbed-node-1] 2025-06-02 20:14:31.229796 | orchestrator | changed: [testbed-node-2] 2025-06-02 20:14:31.229800 | orchestrator | 2025-06-02 20:14:31.229805 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-02 20:14:31.229809 | orchestrator | testbed-node-0 : ok=26  changed=18  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-06-02 20:14:31.229814 | orchestrator | testbed-node-1 : ok=13  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-06-02 20:14:31.229819 | orchestrator | testbed-node-2 : ok=13  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-06-02 20:14:31.229823 | orchestrator | 2025-06-02 20:14:31.229828 | orchestrator | 2025-06-02 20:14:31.229832 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-02 20:14:31.229837 | orchestrator | Monday 02 June 2025 20:14:28 +0000 (0:00:18.070) 0:01:54.313 *********** 2025-06-02 20:14:31.229841 | orchestrator | =============================================================================== 2025-06-02 20:14:31.229846 | orchestrator | magnum : Restart magnum-conductor container ---------------------------- 18.07s 2025-06-02 20:14:31.229850 | orchestrator | magnum : Running Magnum bootstrap container ---------------------------- 14.99s 2025-06-02 20:14:31.229854 | orchestrator | magnum : Restart magnum-api container ---------------------------------- 14.16s 2025-06-02 20:14:31.229859 | orchestrator | service-ks-register : magnum | Creating endpoints ----------------------- 7.16s 2025-06-02 20:14:31.229863 | orchestrator | magnum : Copying over magnum.conf --------------------------------------- 4.85s 2025-06-02 20:14:31.229868 | orchestrator | magnum : Creating Magnum trustee user 
----------------------------------- 4.19s 2025-06-02 20:14:31.229872 | orchestrator | service-ks-register : magnum | Creating users --------------------------- 4.10s 2025-06-02 20:14:31.229880 | orchestrator | service-ks-register : magnum | Granting user roles ---------------------- 3.94s 2025-06-02 20:14:31.229885 | orchestrator | magnum : Creating Magnum trustee user role ------------------------------ 3.91s 2025-06-02 20:14:31.229889 | orchestrator | service-ks-register : magnum | Creating services ------------------------ 3.82s 2025-06-02 20:14:31.229894 | orchestrator | service-ks-register : magnum | Creating roles --------------------------- 3.64s 2025-06-02 20:14:31.229898 | orchestrator | magnum : Creating Magnum trustee domain --------------------------------- 3.63s 2025-06-02 20:14:31.229903 | orchestrator | service-ks-register : magnum | Creating projects ------------------------ 3.22s 2025-06-02 20:14:31.229907 | orchestrator | magnum : Copying over kubeconfig file ----------------------------------- 3.19s 2025-06-02 20:14:31.229911 | orchestrator | magnum : Creating Magnum database user and setting permissions ---------- 2.30s 2025-06-02 20:14:31.229916 | orchestrator | magnum : Copying over config.json files for services -------------------- 2.21s 2025-06-02 20:14:31.229920 | orchestrator | service-cert-copy : magnum | Copying over extra CA certificates --------- 2.18s 2025-06-02 20:14:31.229924 | orchestrator | magnum : Creating Magnum database --------------------------------------- 2.13s 2025-06-02 20:14:31.229929 | orchestrator | magnum : Ensuring config directories exist ------------------------------ 2.09s 2025-06-02 20:14:31.229933 | orchestrator | magnum : Check magnum containers ---------------------------------------- 2.01s 2025-06-02 20:14:31.229938 | orchestrator | 2025-06-02 20:14:31 | INFO  | Task 608c3d3d-0110-4202-aeb0-6f352d204193 is in state STARTED 2025-06-02 20:14:31.230285 | orchestrator | 2025-06-02 20:14:31 | INFO  | Task 
53a945be-f52c-4bbb-b1aa-4647c40e8c66 is in state STARTED 2025-06-02 20:14:58.627265 | orchestrator | 2025-06-02 20:14:58 | INFO  | Task 0ec5d894-04ad-4963-adc9-e07e3edb11e6 is in state STARTED 2025-06-02 20:14:58.627330 | orchestrator | 2025-06-02 20:14:58 | INFO  | Wait 1 second(s) until the next check 2025-06-02 20:15:01.664453 | orchestrator | 2025-06-02 20:15:01 | INFO  | Task edda95ee-0720-4434-8b2c-a717dac3476c is in state STARTED 2025-06-02 20:15:01.665096 | orchestrator | 2025-06-02 20:15:01 | INFO  | Task 608c3d3d-0110-4202-aeb0-6f352d204193 is in state STARTED 2025-06-02 20:15:01.666702 | orchestrator | 2025-06-02 20:15:01 | INFO  | Task 53a945be-f52c-4bbb-b1aa-4647c40e8c66 is in state STARTED 2025-06-02 20:15:01.668140 | orchestrator | 2025-06-02 20:15:01 | INFO  | Task 0ec5d894-04ad-4963-adc9-e07e3edb11e6 is in state STARTED 2025-06-02 20:15:01.668184 | orchestrator | 2025-06-02 20:15:01 | INFO  | Wait 1 second(s) until the next check 2025-06-02 20:15:04.721146 | orchestrator | 2025-06-02 20:15:04 | INFO  | Task edda95ee-0720-4434-8b2c-a717dac3476c is in state STARTED 2025-06-02 20:15:04.721665 | orchestrator | 2025-06-02 20:15:04 | INFO  | Task 608c3d3d-0110-4202-aeb0-6f352d204193 is in state STARTED 2025-06-02 20:15:04.722340 | orchestrator | 2025-06-02 20:15:04 | INFO  | Task 53a945be-f52c-4bbb-b1aa-4647c40e8c66 is in state STARTED 2025-06-02 20:15:04.723294 | orchestrator | 2025-06-02 20:15:04 | INFO  | Task 0ec5d894-04ad-4963-adc9-e07e3edb11e6 is in state STARTED 2025-06-02 20:15:04.723353 | orchestrator | 2025-06-02 20:15:04 | INFO  | Wait 1 second(s) until the next check 2025-06-02 20:15:07.765308 | orchestrator | 2025-06-02 20:15:07 | INFO  | Task edda95ee-0720-4434-8b2c-a717dac3476c is in state STARTED 2025-06-02 20:15:07.767804 | orchestrator | 2025-06-02 20:15:07 | INFO  | Task ce577db7-04a8-45de-ba4c-ffd16e1d3bc7 is in state STARTED 2025-06-02 20:15:07.769898 | orchestrator | 2025-06-02 20:15:07 | INFO  | Task 
608c3d3d-0110-4202-aeb0-6f352d204193 is in state SUCCESS 2025-06-02 20:15:07.771202 | orchestrator | 2025-06-02 20:15:07.771267 | orchestrator | 2025-06-02 20:15:07.771283 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-06-02 20:15:07.771314 | orchestrator | 2025-06-02 20:15:07.771321 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-06-02 20:15:07.771327 | orchestrator | Monday 02 June 2025 20:10:26 +0000 (0:00:00.299) 0:00:00.299 *********** 2025-06-02 20:15:07.771334 | orchestrator | ok: [testbed-node-0] 2025-06-02 20:15:07.771341 | orchestrator | ok: [testbed-node-1] 2025-06-02 20:15:07.771347 | orchestrator | ok: [testbed-node-2] 2025-06-02 20:15:07.771353 | orchestrator | ok: [testbed-node-3] 2025-06-02 20:15:07.771359 | orchestrator | ok: [testbed-node-4] 2025-06-02 20:15:07.771365 | orchestrator | ok: [testbed-node-5] 2025-06-02 20:15:07.771370 | orchestrator | 2025-06-02 20:15:07.771376 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-06-02 20:15:07.771382 | orchestrator | Monday 02 June 2025 20:10:26 +0000 (0:00:00.794) 0:00:01.094 *********** 2025-06-02 20:15:07.771388 | orchestrator | ok: [testbed-node-0] => (item=enable_neutron_True) 2025-06-02 20:15:07.771394 | orchestrator | ok: [testbed-node-1] => (item=enable_neutron_True) 2025-06-02 20:15:07.771416 | orchestrator | ok: [testbed-node-2] => (item=enable_neutron_True) 2025-06-02 20:15:07.771422 | orchestrator | ok: [testbed-node-3] => (item=enable_neutron_True) 2025-06-02 20:15:07.771428 | orchestrator | ok: [testbed-node-4] => (item=enable_neutron_True) 2025-06-02 20:15:07.771434 | orchestrator | ok: [testbed-node-5] => (item=enable_neutron_True) 2025-06-02 20:15:07.771440 | orchestrator | 2025-06-02 20:15:07.771445 | orchestrator | PLAY [Apply role neutron] ****************************************************** 2025-06-02 20:15:07.771451 | 
orchestrator | 2025-06-02 20:15:07.771469 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2025-06-02 20:15:07.771502 | orchestrator | Monday 02 June 2025 20:10:27 +0000 (0:00:00.564) 0:00:01.659 *********** 2025-06-02 20:15:07.771566 | orchestrator | included: /ansible/roles/neutron/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-02 20:15:07.771574 | orchestrator | 2025-06-02 20:15:07.771580 | orchestrator | TASK [neutron : Get container facts] ******************************************* 2025-06-02 20:15:07.771586 | orchestrator | Monday 02 June 2025 20:10:28 +0000 (0:00:01.242) 0:00:02.901 *********** 2025-06-02 20:15:07.771592 | orchestrator | ok: [testbed-node-0] 2025-06-02 20:15:07.771598 | orchestrator | ok: [testbed-node-1] 2025-06-02 20:15:07.771603 | orchestrator | ok: [testbed-node-2] 2025-06-02 20:15:07.771609 | orchestrator | ok: [testbed-node-3] 2025-06-02 20:15:07.771615 | orchestrator | ok: [testbed-node-4] 2025-06-02 20:15:07.771620 | orchestrator | ok: [testbed-node-5] 2025-06-02 20:15:07.771626 | orchestrator | 2025-06-02 20:15:07.771632 | orchestrator | TASK [neutron : Get container volume facts] ************************************ 2025-06-02 20:15:07.771637 | orchestrator | Monday 02 June 2025 20:10:29 +0000 (0:00:01.135) 0:00:04.037 *********** 2025-06-02 20:15:07.771643 | orchestrator | ok: [testbed-node-2] 2025-06-02 20:15:07.771649 | orchestrator | ok: [testbed-node-1] 2025-06-02 20:15:07.771654 | orchestrator | ok: [testbed-node-0] 2025-06-02 20:15:07.771660 | orchestrator | ok: [testbed-node-3] 2025-06-02 20:15:07.771666 | orchestrator | ok: [testbed-node-4] 2025-06-02 20:15:07.771671 | orchestrator | ok: [testbed-node-5] 2025-06-02 20:15:07.771677 | orchestrator | 2025-06-02 20:15:07.771692 | orchestrator | TASK [neutron : Check for ML2/OVN presence] ************************************ 2025-06-02 20:15:07.771699 | 
orchestrator | Monday 02 June 2025 20:10:30 +0000 (0:00:01.031) 0:00:05.069 ***********
2025-06-02 20:15:07.771704 | orchestrator | ok: [testbed-node-0] => {
2025-06-02 20:15:07.771711 | orchestrator |  "changed": false,
2025-06-02 20:15:07.771717 | orchestrator |  "msg": "All assertions passed"
2025-06-02 20:15:07.771723 | orchestrator | }
2025-06-02 20:15:07.771729 | orchestrator | ok: [testbed-node-1] => {
2025-06-02 20:15:07.771735 | orchestrator |  "changed": false,
2025-06-02 20:15:07.771741 | orchestrator |  "msg": "All assertions passed"
2025-06-02 20:15:07.771747 | orchestrator | }
2025-06-02 20:15:07.771752 | orchestrator | ok: [testbed-node-2] => {
2025-06-02 20:15:07.771758 | orchestrator |  "changed": false,
2025-06-02 20:15:07.771764 | orchestrator |  "msg": "All assertions passed"
2025-06-02 20:15:07.771770 | orchestrator | }
2025-06-02 20:15:07.771775 | orchestrator | ok: [testbed-node-3] => {
2025-06-02 20:15:07.771781 | orchestrator |  "changed": false,
2025-06-02 20:15:07.771786 | orchestrator |  "msg": "All assertions passed"
2025-06-02 20:15:07.771792 | orchestrator | }
2025-06-02 20:15:07.771798 | orchestrator | ok: [testbed-node-4] => {
2025-06-02 20:15:07.771805 | orchestrator |  "changed": false,
2025-06-02 20:15:07.771812 | orchestrator |  "msg": "All assertions passed"
2025-06-02 20:15:07.771818 | orchestrator | }
2025-06-02 20:15:07.771825 | orchestrator | ok: [testbed-node-5] => {
2025-06-02 20:15:07.771831 | orchestrator |  "changed": false,
2025-06-02 20:15:07.771838 | orchestrator |  "msg": "All assertions passed"
2025-06-02 20:15:07.771845 | orchestrator | }
2025-06-02 20:15:07.771851 | orchestrator |
2025-06-02 20:15:07.771858 | orchestrator | TASK [neutron : Check for ML2/OVS presence] ************************************
2025-06-02 20:15:07.771865 | orchestrator | Monday 02 June 2025 20:10:31 +0000 (0:00:00.873) 0:00:05.942 ***********
2025-06-02 20:15:07.771872 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:15:07.771879 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:15:07.771885 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:15:07.771892 | orchestrator | skipping: [testbed-node-3]
2025-06-02 20:15:07.771898 | orchestrator | skipping: [testbed-node-4]
2025-06-02 20:15:07.771903 | orchestrator | skipping: [testbed-node-5]
2025-06-02 20:15:07.771909 | orchestrator |
2025-06-02 20:15:07.771915 | orchestrator | TASK [service-ks-register : neutron | Creating services] ***********************
2025-06-02 20:15:07.771920 | orchestrator | Monday 02 June 2025 20:10:32 +0000 (0:00:00.565) 0:00:06.508 ***********
2025-06-02 20:15:07.771932 | orchestrator | changed: [testbed-node-0] => (item=neutron (network))
2025-06-02 20:15:07.771937 | orchestrator |
2025-06-02 20:15:07.771943 | orchestrator | TASK [service-ks-register : neutron | Creating endpoints] **********************
2025-06-02 20:15:07.771949 | orchestrator | Monday 02 June 2025 20:10:35 +0000 (0:00:03.365) 0:00:09.873 ***********
2025-06-02 20:15:07.771955 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api-int.testbed.osism.xyz:9696 -> internal)
2025-06-02 20:15:07.771961 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api.testbed.osism.xyz:9696 -> public)
2025-06-02 20:15:07.771967 | orchestrator |
2025-06-02 20:15:07.771983 | orchestrator | TASK [service-ks-register : neutron | Creating projects] ***********************
2025-06-02 20:15:07.771989 | orchestrator | Monday 02 June 2025 20:10:42 +0000 (0:00:06.561) 0:00:16.435 ***********
2025-06-02 20:15:07.771995 | orchestrator | ok: [testbed-node-0] => (item=service)
2025-06-02 20:15:07.772001 | orchestrator |
2025-06-02 20:15:07.772006 | orchestrator | TASK [service-ks-register : neutron | Creating users] **************************
2025-06-02 20:15:07.772012 | orchestrator | Monday 02 June 2025 20:10:45 +0000 (0:00:03.391) 0:00:19.827 ***********
2025-06-02 20:15:07.772018 | orchestrator | [WARNING]: Module did not set no_log for update_password
2025-06-02 20:15:07.772024 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service)
2025-06-02 20:15:07.772030 | orchestrator |
2025-06-02 20:15:07.772035 | orchestrator | TASK [service-ks-register : neutron | Creating roles] **************************
2025-06-02 20:15:07.772041 | orchestrator | Monday 02 June 2025 20:10:49 +0000 (0:00:04.224) 0:00:24.051 ***********
2025-06-02 20:15:07.772047 | orchestrator | ok: [testbed-node-0] => (item=admin)
2025-06-02 20:15:07.772052 | orchestrator |
2025-06-02 20:15:07.772058 | orchestrator | TASK [service-ks-register : neutron | Granting user roles] *********************
2025-06-02 20:15:07.772064 | orchestrator | Monday 02 June 2025 20:10:53 +0000 (0:00:03.548) 0:00:27.599 ***********
2025-06-02 20:15:07.772070 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> admin)
2025-06-02 20:15:07.772075 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> service)
2025-06-02 20:15:07.772081 | orchestrator |
2025-06-02 20:15:07.772087 | orchestrator | TASK [neutron : include_tasks] *************************************************
2025-06-02 20:15:07.772092 | orchestrator | Monday 02 June 2025 20:11:00 +0000 (0:00:07.255) 0:00:34.855 ***********
2025-06-02 20:15:07.772098 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:15:07.772104 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:15:07.772109 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:15:07.772115 | orchestrator | skipping: [testbed-node-3]
2025-06-02 20:15:07.772120 | orchestrator | skipping: [testbed-node-4]
2025-06-02 20:15:07.772126 | orchestrator | skipping: [testbed-node-5]
2025-06-02 20:15:07.772132 | orchestrator |
2025-06-02 20:15:07.772137 | orchestrator | TASK [Load and persist kernel modules] *****************************************
2025-06-02 20:15:07.772147 | orchestrator | Monday 02 June 2025 20:11:01 +0000 (0:00:00.742) 0:00:35.597 ***********
2025-06-02 20:15:07.772158 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:15:07.772174 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:15:07.772183 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:15:07.772198 | orchestrator | skipping: [testbed-node-4]
2025-06-02 20:15:07.772208 | orchestrator | skipping: [testbed-node-3]
2025-06-02 20:15:07.772218 | orchestrator | skipping: [testbed-node-5]
2025-06-02 20:15:07.772227 | orchestrator |
2025-06-02 20:15:07.772236 | orchestrator | TASK [neutron : Check IPv6 support] ********************************************
2025-06-02 20:15:07.772245 | orchestrator | Monday 02 June 2025 20:11:03 +0000 (0:00:02.494) 0:00:38.092 ***********
2025-06-02 20:15:07.772254 | orchestrator | ok: [testbed-node-1]
2025-06-02 20:15:07.772265 | orchestrator | ok: [testbed-node-2]
2025-06-02 20:15:07.772274 | orchestrator | ok: [testbed-node-0]
2025-06-02 20:15:07.772284 | orchestrator | ok: [testbed-node-3]
2025-06-02 20:15:07.772295 | orchestrator | ok: [testbed-node-4]
2025-06-02 20:15:07.772306 | orchestrator | ok: [testbed-node-5]
2025-06-02 20:15:07.772325 | orchestrator |
2025-06-02 20:15:07.772336 | orchestrator | TASK [Setting sysctl values] ***************************************************
2025-06-02 20:15:07.772342 | orchestrator | Monday 02 June 2025 20:11:05 +0000 (0:00:01.335) 0:00:39.428 ***********
2025-06-02 20:15:07.772348 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:15:07.772354 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:15:07.772360 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:15:07.772366 | orchestrator | skipping: [testbed-node-3]
2025-06-02 20:15:07.772371 | orchestrator | skipping: [testbed-node-5]
2025-06-02 20:15:07.772377 | orchestrator | skipping: [testbed-node-4]
2025-06-02 20:15:07.772383 | orchestrator |
2025-06-02 20:15:07.772388 | orchestrator | TASK [neutron : Ensuring config directories exist] *****************************
2025-06-02 20:15:07.772394 | orchestrator | Monday 02 June 2025 20:11:08 +0000 (0:00:02.814) 0:00:42.243 ***********
2025-06-02 20:15:07.772403 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-06-02 20:15:07.772421 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-06-02 20:15:07.772429 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-06-02 20:15:07.772436 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-06-02 20:15:07.772450 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-06-02 20:15:07.772457 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-06-02 20:15:07.772463 | orchestrator |
2025-06-02 20:15:07.772469 | orchestrator | TASK [neutron : Check if extra ml2 plugins exists] *****************************
2025-06-02 20:15:07.772506 | orchestrator | Monday 02 June 2025 20:11:11 +0000 (0:00:03.340) 0:00:45.583 ***********
2025-06-02 20:15:07.772517 | orchestrator | [WARNING]: Skipped
2025-06-02 20:15:07.772528 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' path
2025-06-02 20:15:07.772538 | orchestrator | due to this access issue:
2025-06-02 20:15:07.772547 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' is not
2025-06-02 20:15:07.772553 | orchestrator | a directory
2025-06-02 20:15:07.772559 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-06-02 20:15:07.772565 | orchestrator |
2025-06-02 20:15:07.772575 | orchestrator | TASK [neutron : include_tasks] *************************************************
2025-06-02 20:15:07.772581 | orchestrator | Monday 02 June 2025 20:11:12 +0000 (0:00:00.893) 0:00:46.477 ***********
2025-06-02 20:15:07.772587 | orchestrator | included: /ansible/roles/neutron/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-06-02 20:15:07.772594 | orchestrator |
2025-06-02 20:15:07.772611 | orchestrator | TASK [service-cert-copy : neutron | Copying over extra CA certificates] ********
2025-06-02 20:15:07.772616 | orchestrator | Monday 02 June 2025 20:11:13 +0000 (0:00:01.367) 0:00:47.844 ***********
2025-06-02 20:15:07.772630 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-06-02 20:15:07.772646 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-06-02 20:15:07.772654 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-06-02 20:15:07.772664 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-06-02 20:15:07.772676 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-06-02 20:15:07.772682 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-06-02 20:15:07.772694 | orchestrator |
2025-06-02 20:15:07.772700 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS certificate] ***
2025-06-02 20:15:07.772705 | orchestrator | Monday 02 June 2025 20:11:18 +0000 (0:00:04.427) 0:00:52.272 ***********
2025-06-02 20:15:07.772715 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-06-02 20:15:07.772721 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:15:07.772727 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-06-02 20:15:07.772733 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:15:07.772743 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-06-02 20:15:07.772749 | orchestrator | skipping: [testbed-node-3]
2025-06-02 20:15:07.772755 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-06-02 20:15:07.772766 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:15:07.772772 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-06-02 20:15:07.772778 | orchestrator | skipping: [testbed-node-4]
2025-06-02 20:15:07.772788 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-06-02 20:15:07.772794 | orchestrator | skipping: [testbed-node-5]
2025-06-02 20:15:07.772799 | orchestrator |
2025-06-02 20:15:07.772805 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS key] *****
2025-06-02 20:15:07.772811 | orchestrator | Monday 02 June 2025 20:11:21 +0000 (0:00:03.110) 0:00:55.383 ***********
2025-06-02 20:15:07.772817 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-06-02 20:15:07.772823 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:15:07.772834 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-06-02 20:15:07.772847 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:15:07.772853 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-06-02 20:15:07.772859 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:15:07.772871 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-06-02 20:15:07.772877 | orchestrator | skipping: [testbed-node-3]
2025-06-02 20:15:07.772884 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-06-02 20:15:07.772889 | orchestrator | skipping: [testbed-node-4]
2025-06-02 20:15:07.772896 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-06-02 20:15:07.772902 | orchestrator | skipping: [testbed-node-5]
2025-06-02 20:15:07.772907 | orchestrator |
2025-06-02 20:15:07.772913 | orchestrator | TASK [neutron : Creating TLS backend PEM File] *********************************
2025-06-02 20:15:07.772922 | orchestrator | Monday 02 June 2025 20:11:24 +0000 (0:00:03.036) 0:00:58.420 ***********
2025-06-02 20:15:07.772933 | orchestrator | skipping: [testbed-node-4]
2025-06-02 20:15:07.772938 | orchestrator | skipping: [testbed-node-3]
2025-06-02 20:15:07.772944 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:15:07.772950 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:15:07.772955 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:15:07.772961 | orchestrator | skipping: [testbed-node-5]
2025-06-02 20:15:07.772967 | orchestrator |
2025-06-02 20:15:07.772972 | orchestrator | TASK [neutron : Check if policies shall be overwritten] ************************
2025-06-02 20:15:07.772978 | orchestrator | Monday 02 June 2025 20:11:27 +0000 (0:00:03.163) 0:01:01.583 ***********
2025-06-02 20:15:07.772984 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:15:07.772989 | orchestrator |
2025-06-02 20:15:07.772995 | orchestrator | TASK [neutron : Set neutron policy file] ***************************************
2025-06-02 20:15:07.773001 | orchestrator | Monday 02 June 2025 20:11:27 +0000 (0:00:00.101) 0:01:01.684 ***********
2025-06-02 20:15:07.773007 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:15:07.773013 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:15:07.773018 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:15:07.773024 | orchestrator | skipping: [testbed-node-3]
2025-06-02 20:15:07.773029 | orchestrator | skipping: [testbed-node-4]
2025-06-02 20:15:07.773035 | orchestrator | skipping: [testbed-node-5]
2025-06-02 20:15:07.773041 | orchestrator |
2025-06-02 20:15:07.773047 | orchestrator | TASK [neutron : Copying over existing policy file] *****************************
2025-06-02 20:15:07.773052 | orchestrator | Monday 02 June 2025 20:11:28 +0000 (0:00:00.576) 0:01:02.261 ***********
2025-06-02 20:15:07.773058 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-06-02 20:15:07.773065 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:15:07.773077 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-06-02 20:15:07.773088 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:15:07.773095 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-06-02 20:15:07.773105 | orchestrator | skipping: [testbed-node-4]
2025-06-02 20:15:07.773319 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-06-02 20:15:07.773340 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:15:07.773346 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-06-02 20:15:07.773352 | orchestrator | skipping: [testbed-node-5]
2025-06-02 20:15:07.773363 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-06-02 20:15:07.773369 | orchestrator | skipping: [testbed-node-3]
2025-06-02 20:15:07.773375 | orchestrator |
2025-06-02 20:15:07.773381 | orchestrator | TASK [neutron : Copying over config.json files for services] *******************
2025-06-02 20:15:07.773387 | orchestrator | Monday 02 June 2025 20:11:31 +0000 (0:00:03.063) 0:01:05.324 ***********
2025-06-02 20:15:07.773393 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-06-02 20:15:07.773410 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn':
'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-06-02 20:15:07.773416 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-06-02 20:15:07.773423 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-06-02 20:15:07.773432 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-06-02 20:15:07.773438 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-06-02 20:15:07.773449 | orchestrator | 2025-06-02 20:15:07.773455 | orchestrator | TASK [neutron : Copying over neutron.conf] ************************************* 2025-06-02 20:15:07.773460 | orchestrator | Monday 02 June 2025 20:11:35 +0000 (0:00:04.510) 0:01:09.835 *********** 2025-06-02 20:15:07.773470 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-06-02 20:15:07.773495 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-06-02 20:15:07.773502 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-06-02 20:15:07.773511 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-06-02 20:15:07.773521 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-06-02 20:15:07.773531 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-06-02 20:15:07.773537 | orchestrator | 2025-06-02 20:15:07.773543 | orchestrator | TASK [neutron : Copying over neutron_vpnaas.conf] ****************************** 2025-06-02 20:15:07.773549 | orchestrator | Monday 02 June 2025 20:11:41 +0000 (0:00:05.518) 0:01:15.353 *********** 2025-06-02 20:15:07.773556 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': 
'30'}}})  2025-06-02 20:15:07.773566 | orchestrator | skipping: [testbed-node-5] 2025-06-02 20:15:07.773585 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-06-02 20:15:07.773597 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-02 20:15:07.773613 | orchestrator | skipping: [testbed-node-3] 2025-06-02 20:15:07.773624 | orchestrator | skipping: [testbed-node-4] => (item={'key': 
'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-02 20:15:07.773659 | orchestrator | skipping: [testbed-node-4] 2025-06-02 20:15:07.773672 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-06-02 20:15:07.773679 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 
'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-06-02 20:15:07.773686 | orchestrator | 2025-06-02 20:15:07.773691 | orchestrator | TASK [neutron : Copying over ssh key] ****************************************** 2025-06-02 20:15:07.773697 | orchestrator | Monday 02 June 2025 20:11:44 +0000 (0:00:03.520) 0:01:18.874 *********** 2025-06-02 20:15:07.773703 | orchestrator | skipping: [testbed-node-3] 2025-06-02 20:15:07.773709 | orchestrator | skipping: [testbed-node-4] 2025-06-02 20:15:07.773715 | orchestrator | skipping: [testbed-node-5] 2025-06-02 20:15:07.773720 | orchestrator | changed: [testbed-node-1] 2025-06-02 20:15:07.773726 | orchestrator | changed: [testbed-node-2] 2025-06-02 20:15:07.773732 | orchestrator | changed: [testbed-node-0] 2025-06-02 20:15:07.773737 | orchestrator | 2025-06-02 20:15:07.773743 | orchestrator | TASK [neutron : Copying over ml2_conf.ini] ************************************* 2025-06-02 20:15:07.773754 | orchestrator | Monday 02 June 2025 20:11:47 +0000 (0:00:02.707) 0:01:21.582 *********** 2025-06-02 20:15:07.773763 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-02 20:15:07.773769 | orchestrator | skipping: [testbed-node-3] 2025-06-02 20:15:07.773775 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-02 20:15:07.773781 | orchestrator | skipping: [testbed-node-4] 2025-06-02 20:15:07.773792 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', 
'', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-02 20:15:07.773798 | orchestrator | skipping: [testbed-node-5] 2025-06-02 20:15:07.773804 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-06-02 20:15:07.773810 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 
'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-06-02 20:15:07.773825 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-06-02 20:15:07.773832 | orchestrator | 2025-06-02 20:15:07.773837 | orchestrator | TASK [neutron : Copying over linuxbridge_agent.ini] **************************** 2025-06-02 20:15:07.773843 | orchestrator | Monday 02 June 2025 20:11:51 +0000 (0:00:04.008) 0:01:25.590 *********** 2025-06-02 20:15:07.773849 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:15:07.773855 | orchestrator | skipping: [testbed-node-3] 2025-06-02 20:15:07.773860 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:15:07.773866 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:15:07.773872 | orchestrator | skipping: [testbed-node-4] 2025-06-02 20:15:07.773877 | orchestrator | skipping: [testbed-node-5] 2025-06-02 20:15:07.773883 | orchestrator | 2025-06-02 20:15:07.773889 | orchestrator | TASK [neutron : Copying over openvswitch_agent.ini] **************************** 2025-06-02 
20:15:07.773894 | orchestrator | Monday 02 June 2025 20:11:53 +0000 (0:00:02.167) 0:01:27.758 *********** 2025-06-02 20:15:07.773900 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:15:07.773906 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:15:07.773912 | orchestrator | skipping: [testbed-node-4] 2025-06-02 20:15:07.773917 | orchestrator | skipping: [testbed-node-5] 2025-06-02 20:15:07.773923 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:15:07.773929 | orchestrator | skipping: [testbed-node-3] 2025-06-02 20:15:07.773936 | orchestrator | 2025-06-02 20:15:07.773943 | orchestrator | TASK [neutron : Copying over sriov_agent.ini] ********************************** 2025-06-02 20:15:07.773950 | orchestrator | Monday 02 June 2025 20:11:56 +0000 (0:00:02.795) 0:01:30.553 *********** 2025-06-02 20:15:07.773957 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:15:07.773963 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:15:07.773970 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:15:07.773979 | orchestrator | skipping: [testbed-node-3] 2025-06-02 20:15:07.773986 | orchestrator | skipping: [testbed-node-4] 2025-06-02 20:15:07.773993 | orchestrator | skipping: [testbed-node-5] 2025-06-02 20:15:07.773999 | orchestrator | 2025-06-02 20:15:07.774006 | orchestrator | TASK [neutron : Copying over mlnx_agent.ini] *********************************** 2025-06-02 20:15:07.774050 | orchestrator | Monday 02 June 2025 20:11:59 +0000 (0:00:02.948) 0:01:33.502 *********** 2025-06-02 20:15:07.774067 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:15:07.774077 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:15:07.774088 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:15:07.774098 | orchestrator | skipping: [testbed-node-3] 2025-06-02 20:15:07.774108 | orchestrator | skipping: [testbed-node-4] 2025-06-02 20:15:07.774119 | orchestrator | skipping: [testbed-node-5] 2025-06-02 20:15:07.774129 | orchestrator | 
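The loop items echoed in the task output above are Python-repr dicts (`(item={'key': ..., 'value': {...}})`), so when mining these console logs the payload can be recovered with `ast.literal_eval` instead of fragile regexes. A minimal sketch, assuming log lines shaped like the ones in this section (the sample line below is abridged from the output above, and the helper name is illustrative, not part of any tooling shown here):

```python
import ast

def parse_loop_item(log_line: str) -> dict:
    """Extract the Ansible loop item dict echoed as (item={...}) in a log line."""
    start = log_line.index("(item=") + len("(item=")
    # The dict repr runs to its matching closing brace; track brace depth
    # so nested dicts (e.g. the 'healthcheck' mapping) are handled.
    depth = 0
    for i, ch in enumerate(log_line[start:], start):
        if ch == "{":
            depth += 1
        elif ch == "}":
            depth -= 1
            if depth == 0:
                return ast.literal_eval(log_line[start:i + 1])
    raise ValueError("unbalanced item dict in log line")

# Abridged sample line in the same shape as the output above.
line = ("changed: [testbed-node-0] => (item={'key': 'neutron-server', "
        "'value': {'container_name': 'neutron_server', 'image': "
        "'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530'}})")
item = parse_loop_item(line)
```

This keeps the extraction safe (no `eval`) and tolerant of the nested `healthcheck` and `haproxy` mappings that appear in the full items.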
2025-06-02 20:15:07.774140 | orchestrator | TASK [neutron : Copying over eswitchd.conf] ************************************ 2025-06-02 20:15:07.774160 | orchestrator | Monday 02 June 2025 20:12:01 +0000 (0:00:01.810) 0:01:35.313 *********** 2025-06-02 20:15:07.774170 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:15:07.774176 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:15:07.774183 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:15:07.774190 | orchestrator | skipping: [testbed-node-3] 2025-06-02 20:15:07.774196 | orchestrator | skipping: [testbed-node-4] 2025-06-02 20:15:07.774203 | orchestrator | skipping: [testbed-node-5] 2025-06-02 20:15:07.774210 | orchestrator | 2025-06-02 20:15:07.774216 | orchestrator | TASK [neutron : Copying over dhcp_agent.ini] *********************************** 2025-06-02 20:15:07.774224 | orchestrator | Monday 02 June 2025 20:12:03 +0000 (0:00:01.905) 0:01:37.218 *********** 2025-06-02 20:15:07.774230 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:15:07.774237 | orchestrator | skipping: [testbed-node-3] 2025-06-02 20:15:07.774243 | orchestrator | skipping: [testbed-node-4] 2025-06-02 20:15:07.774250 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:15:07.774256 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:15:07.774263 | orchestrator | skipping: [testbed-node-5] 2025-06-02 20:15:07.774269 | orchestrator | 2025-06-02 20:15:07.774278 | orchestrator | TASK [neutron : Copying over dnsmasq.conf] ************************************* 2025-06-02 20:15:07.774288 | orchestrator | Monday 02 June 2025 20:12:05 +0000 (0:00:02.092) 0:01:39.311 *********** 2025-06-02 20:15:07.774297 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-06-02 20:15:07.774308 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:15:07.774318 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  
2025-06-02 20:15:07.774328 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:15:07.774339 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-06-02 20:15:07.774350 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:15:07.774358 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-06-02 20:15:07.774365 | orchestrator | skipping: [testbed-node-3] 2025-06-02 20:15:07.774372 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-06-02 20:15:07.774378 | orchestrator | skipping: [testbed-node-4] 2025-06-02 20:15:07.774387 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-06-02 20:15:07.774393 | orchestrator | skipping: [testbed-node-5] 2025-06-02 20:15:07.774399 | orchestrator | 2025-06-02 20:15:07.774405 | orchestrator | TASK [neutron : Copying over l3_agent.ini] ************************************* 2025-06-02 20:15:07.774410 | orchestrator | Monday 02 June 2025 20:12:07 +0000 (0:00:02.442) 0:01:41.754 *********** 2025-06-02 20:15:07.774417 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-06-02 20:15:07.774423 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:15:07.774435 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-06-02 20:15:07.774447 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:15:07.774453 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 
'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-06-02 20:15:07.774459 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:15:07.774465 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-02 20:15:07.774472 | orchestrator | skipping: [testbed-node-3] 2025-06-02 20:15:07.774608 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-02 20:15:07.774621 | orchestrator | skipping: [testbed-node-5] => (item={'key': 
'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-02 20:15:07.774636 | orchestrator | skipping: [testbed-node-4] 2025-06-02 20:15:07.774642 | orchestrator | skipping: [testbed-node-5] 2025-06-02 20:15:07.774647 | orchestrator | 2025-06-02 20:15:07.774653 | orchestrator | TASK [neutron : Copying over fwaas_driver.ini] ********************************* 2025-06-02 20:15:07.774659 | orchestrator | Monday 02 June 2025 20:12:10 +0000 (0:00:03.156) 0:01:44.911 *********** 2025-06-02 20:15:07.774672 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-06-02 20:15:07.774678 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:15:07.774683 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-06-02 20:15:07.774688 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:15:07.774696 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-06-02 20:15:07.774701 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:15:07.774706 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-02 20:15:07.774716 | orchestrator | skipping: [testbed-node-3] 2025-06-02 20:15:07.774722 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-02 20:15:07.774727 | orchestrator | skipping: [testbed-node-4] 2025-06-02 20:15:07.774736 | orchestrator | skipping: [testbed-node-5] => 
(item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-06-02 20:15:07.774741 | orchestrator | skipping: [testbed-node-5]
2025-06-02 20:15:07.774746 | orchestrator |
2025-06-02 20:15:07.774750 | orchestrator | TASK [neutron : Copying over metadata_agent.ini] *******************************
2025-06-02 20:15:07.774755 | orchestrator | Monday 02 June 2025 20:12:12 +0000 (0:00:02.174) 0:01:47.085 ***********
2025-06-02 20:15:07.774760 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:15:07.774765 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:15:07.774769 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:15:07.774774 | orchestrator | skipping: [testbed-node-5]
2025-06-02 20:15:07.774779 | orchestrator | skipping: [testbed-node-4]
2025-06-02 20:15:07.774783 | orchestrator | skipping: [testbed-node-3]
2025-06-02 20:15:07.774788 | orchestrator |
2025-06-02 20:15:07.774793 | orchestrator | TASK [neutron : Copying over neutron_ovn_metadata_agent.ini] *******************
2025-06-02 20:15:07.774798 | orchestrator | Monday 02 June 2025 20:12:15 +0000 (0:00:02.951) 0:01:50.036 ***********
2025-06-02 20:15:07.774802 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:15:07.774807 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:15:07.774812 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:15:07.774816 | orchestrator | changed: [testbed-node-4]
2025-06-02 20:15:07.774821 | orchestrator | changed: [testbed-node-3]
2025-06-02 20:15:07.774826 | orchestrator | changed: [testbed-node-5]
2025-06-02 20:15:07.774831 | orchestrator |
2025-06-02 20:15:07.774835 | orchestrator | TASK [neutron : Copying over neutron_ovn_vpn_agent.ini] ************************
2025-06-02 20:15:07.774840 | orchestrator | Monday 02 June 2025 20:12:20 +0000 (0:00:05.008) 0:01:55.045 ***********
2025-06-02 20:15:07.774845 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:15:07.774850 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:15:07.774854 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:15:07.774859 | orchestrator | skipping: [testbed-node-5]
2025-06-02 20:15:07.774864 | orchestrator | skipping: [testbed-node-4]
2025-06-02 20:15:07.774868 | orchestrator | skipping: [testbed-node-3]
2025-06-02 20:15:07.774873 | orchestrator |
2025-06-02 20:15:07.774878 | orchestrator | TASK [neutron : Copying over metering_agent.ini] *******************************
2025-06-02 20:15:07.774887 | orchestrator | Monday 02 June 2025 20:12:22 +0000 (0:00:01.805) 0:01:56.851 ***********
2025-06-02 20:15:07.774892 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:15:07.774897 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:15:07.774902 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:15:07.774906 | orchestrator | skipping: [testbed-node-3]
2025-06-02 20:15:07.774911 | orchestrator | skipping: [testbed-node-4]
2025-06-02 20:15:07.774916 | orchestrator | skipping: [testbed-node-5]
2025-06-02 20:15:07.774920 | orchestrator |
2025-06-02 20:15:07.774925 | orchestrator | TASK [neutron : Copying over ironic_neutron_agent.ini] *************************
2025-06-02 20:15:07.774930 | orchestrator | Monday 02 June 2025 20:12:24 +0000 (0:00:01.826) 0:01:58.678 ***********
2025-06-02 20:15:07.774935 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:15:07.774940 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:15:07.774944 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:15:07.774949 | orchestrator | skipping: [testbed-node-4]
2025-06-02 20:15:07.774954 | orchestrator | skipping: [testbed-node-3]
2025-06-02 20:15:07.774959 | orchestrator | skipping: [testbed-node-5]
2025-06-02 20:15:07.774963 | orchestrator |
2025-06-02 20:15:07.774968 | orchestrator | TASK [neutron : Copying over bgp_dragent.ini] **********************************
2025-06-02 20:15:07.774973 | orchestrator | Monday 02 June 2025 20:12:26 +0000 (0:00:02.060) 0:02:00.738 ***********
2025-06-02 20:15:07.774978 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:15:07.774983 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:15:07.774988 | orchestrator | skipping: [testbed-node-3]
2025-06-02 20:15:07.774993 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:15:07.774997 | orchestrator | skipping: [testbed-node-5]
2025-06-02 20:15:07.775002 | orchestrator | skipping: [testbed-node-4]
2025-06-02 20:15:07.775007 | orchestrator |
2025-06-02 20:15:07.775011 | orchestrator | TASK [neutron : Copying over ovn_agent.ini] ************************************
2025-06-02 20:15:07.775016 | orchestrator | Monday 02 June 2025 20:12:28 +0000 (0:00:02.259) 0:02:02.998 ***********
2025-06-02 20:15:07.775021 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:15:07.775026 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:15:07.775030 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:15:07.775035 | orchestrator | skipping: [testbed-node-3]
2025-06-02 20:15:07.775039 | orchestrator | skipping: [testbed-node-5]
2025-06-02 20:15:07.775044 | orchestrator | skipping: [testbed-node-4]
2025-06-02 20:15:07.775049 | orchestrator |
2025-06-02 20:15:07.775054 | orchestrator | TASK [neutron : Copying over nsx.ini] ******************************************
2025-06-02 20:15:07.775058 | orchestrator | Monday 02 June 2025 20:12:32 +0000 (0:00:03.432) 0:02:06.431 ***********
2025-06-02 20:15:07.775063 | orchestrator | skipping: [testbed-node-3]
2025-06-02 20:15:07.775068 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:15:07.775073 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:15:07.775088 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:15:07.775093 | orchestrator | skipping: [testbed-node-4]
2025-06-02 20:15:07.775097 | orchestrator | skipping: [testbed-node-5]
2025-06-02 20:15:07.775109 | orchestrator |
2025-06-02 20:15:07.775114 | orchestrator | TASK [neutron : Copy neutron-l3-agent-wrapper script] **************************
2025-06-02 20:15:07.775146 | orchestrator | Monday 02 June 2025 20:12:34 +0000 (0:00:02.063) 0:02:08.494 ***********
2025-06-02 20:15:07.775152 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:15:07.775161 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:15:07.775165 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:15:07.775170 | orchestrator | skipping: [testbed-node-3]
2025-06-02 20:15:07.775175 | orchestrator | skipping: [testbed-node-5]
2025-06-02 20:15:07.775180 | orchestrator | skipping: [testbed-node-4]
2025-06-02 20:15:07.775185 | orchestrator |
2025-06-02 20:15:07.775190 | orchestrator | TASK [neutron : Copying over extra ml2 plugins] ********************************
2025-06-02 20:15:07.775195 | orchestrator | Monday 02 June 2025 20:12:36 +0000 (0:00:02.311) 0:02:10.806 ***********
2025-06-02 20:15:07.775200 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:15:07.775208 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:15:07.775213 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:15:07.775217 | orchestrator | skipping: [testbed-node-5]
2025-06-02 20:15:07.775222 | orchestrator | skipping: [testbed-node-3]
2025-06-02 20:15:07.775227 | orchestrator | skipping: [testbed-node-4]
2025-06-02 20:15:07.775232 | orchestrator |
2025-06-02 20:15:07.775237 | orchestrator | TASK [neutron : Copying over neutron-tls-proxy.cfg] ****************************
2025-06-02 20:15:07.775241 | orchestrator | Monday 02 June 2025 20:12:38 +0000 (0:00:01.950) 0:02:12.756 ***********
2025-06-02 20:15:07.775246 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2025-06-02 20:15:07.775252 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:15:07.775256 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2025-06-02 20:15:07.775261 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:15:07.775266 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2025-06-02 20:15:07.775271 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:15:07.775276 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2025-06-02 20:15:07.775280 | orchestrator | skipping: [testbed-node-4]
2025-06-02 20:15:07.775286 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2025-06-02 20:15:07.775290 | orchestrator | skipping: [testbed-node-3]
2025-06-02 20:15:07.775295 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2025-06-02 20:15:07.775300 | orchestrator | skipping: [testbed-node-5]
2025-06-02 20:15:07.775305 | orchestrator |
2025-06-02 20:15:07.775310 | orchestrator | TASK [neutron : Copying over neutron_taas.conf] ********************************
2025-06-02 20:15:07.775314 | orchestrator | Monday 02 June 2025 20:12:40 +0000 (0:00:02.394) 0:02:15.151 ***********
2025-06-02 20:15:07.775322 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 
'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-06-02 20:15:07.775328 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:15:07.775333 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-06-02 20:15:07.775338 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:15:07.775352 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': 
{'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-06-02 20:15:07.775357 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:15:07.775362 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-02 20:15:07.775367 | orchestrator | skipping: [testbed-node-4] 2025-06-02 20:15:07.775373 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-02 20:15:07.775380 | orchestrator | skipping: [testbed-node-5] 2025-06-02 20:15:07.775386 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-02 20:15:07.775391 | orchestrator | skipping: [testbed-node-3] 2025-06-02 20:15:07.775395 | orchestrator | 2025-06-02 20:15:07.775400 | orchestrator | TASK [neutron : Check neutron containers] ************************************** 2025-06-02 20:15:07.775405 | orchestrator | Monday 02 June 2025 20:12:43 +0000 (0:00:02.625) 0:02:17.776 *********** 2025-06-02 20:15:07.775410 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 
'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-06-02 20:15:07.775424 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-06-02 20:15:07.775430 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-06-02 20:15:07.775440 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-06-02 20:15:07.775446 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': 
{'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-06-02 20:15:07.775455 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-06-02 20:15:07.775460 | orchestrator | 2025-06-02 20:15:07.775465 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2025-06-02 20:15:07.775492 | orchestrator | Monday 02 June 2025 20:12:46 +0000 (0:00:03.305) 0:02:21.082 *********** 2025-06-02 20:15:07.775502 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:15:07.775507 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:15:07.775512 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:15:07.775517 | orchestrator | skipping: [testbed-node-3] 2025-06-02 20:15:07.775522 | orchestrator | skipping: [testbed-node-4] 2025-06-02 20:15:07.775527 | orchestrator | skipping: [testbed-node-5] 2025-06-02 20:15:07.775532 | orchestrator | 2025-06-02 20:15:07.775537 | orchestrator | TASK [neutron : Creating Neutron database] ************************************* 2025-06-02 20:15:07.775541 | orchestrator | 
Monday 02 June 2025 20:12:47 +0000 (0:00:00.726) 0:02:21.808 ***********
2025-06-02 20:15:07.775546 | orchestrator | changed: [testbed-node-0]
2025-06-02 20:15:07.775551 | orchestrator |
2025-06-02 20:15:07.775556 | orchestrator | TASK [neutron : Creating Neutron database user and setting permissions] ********
2025-06-02 20:15:07.775561 | orchestrator | Monday 02 June 2025 20:12:49 +0000 (0:00:02.177) 0:02:23.986 ***********
2025-06-02 20:15:07.775565 | orchestrator | changed: [testbed-node-0]
2025-06-02 20:15:07.775570 | orchestrator |
2025-06-02 20:15:07.775575 | orchestrator | TASK [neutron : Running Neutron bootstrap container] ***************************
2025-06-02 20:15:07.775580 | orchestrator | Monday 02 June 2025 20:12:52 +0000 (0:00:02.303) 0:02:26.290 ***********
2025-06-02 20:15:07.775585 | orchestrator | changed: [testbed-node-0]
2025-06-02 20:15:07.775590 | orchestrator |
2025-06-02 20:15:07.775595 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2025-06-02 20:15:07.775599 | orchestrator | Monday 02 June 2025 20:13:34 +0000 (0:00:42.100) 0:03:08.391 ***********
2025-06-02 20:15:07.775604 | orchestrator |
2025-06-02 20:15:07.775609 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2025-06-02 20:15:07.775614 | orchestrator | Monday 02 June 2025 20:13:34 +0000 (0:00:00.099) 0:03:08.490 ***********
2025-06-02 20:15:07.775618 | orchestrator |
2025-06-02 20:15:07.775623 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2025-06-02 20:15:07.775628 | orchestrator | Monday 02 June 2025 20:13:34 +0000 (0:00:00.312) 0:03:08.803 ***********
2025-06-02 20:15:07.775633 | orchestrator |
2025-06-02 20:15:07.775637 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2025-06-02 20:15:07.775642 | orchestrator | Monday 02 June 2025 20:13:34 +0000 (0:00:00.048) 0:03:08.851 ***********
2025-06-02 20:15:07.775647 | orchestrator |
2025-06-02 20:15:07.775652 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2025-06-02 20:15:07.775657 | orchestrator | Monday 02 June 2025 20:13:34 +0000 (0:00:00.049) 0:03:08.949 ***********
2025-06-02 20:15:07.775662 | orchestrator |
2025-06-02 20:15:07.775667 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-server container] *******************
2025-06-02 20:15:07.775697 | orchestrator | Monday 02 June 2025 20:13:34 +0000 (0:00:00.049) 0:03:08.998 ***********
2025-06-02 20:15:07.775724 | orchestrator | changed: [testbed-node-0]
2025-06-02 20:15:07.775732 | orchestrator | changed: [testbed-node-2]
2025-06-02 20:15:07.775737 | orchestrator | changed: [testbed-node-1]
2025-06-02 20:15:07.775742 | orchestrator |
2025-06-02 20:15:07.775747 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-ovn-metadata-agent container] *******
2025-06-02 20:15:07.775752 | orchestrator | Monday 02 June 2025 20:14:05 +0000 (0:00:31.057) 0:03:40.055 ***********
2025-06-02 20:15:07.775757 | orchestrator | changed: [testbed-node-3]
2025-06-02 20:15:07.775762 | orchestrator | changed: [testbed-node-4]
2025-06-02 20:15:07.775766 | orchestrator | changed: [testbed-node-5]
2025-06-02 20:15:07.775771 | orchestrator |
2025-06-02 20:15:07.775776 | orchestrator | PLAY RECAP *********************************************************************
2025-06-02 20:15:07.775781 | orchestrator | testbed-node-0 : ok=27  changed=16  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2025-06-02 20:15:07.775788 | orchestrator | testbed-node-1 : ok=17  changed=9  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0
2025-06-02 20:15:07.775793 | orchestrator | testbed-node-2 : ok=17  changed=9  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0
2025-06-02 20:15:07.775798 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=33  rescued=0 ignored=0
2025-06-02 20:15:07.775803 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=33  rescued=0 ignored=0
2025-06-02 20:15:07.775808 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=33  rescued=0 ignored=0
2025-06-02 20:15:07.775813 | orchestrator |
2025-06-02 20:15:07.775818 | orchestrator |
2025-06-02 20:15:07.775822 | orchestrator | TASKS RECAP ********************************************************************
2025-06-02 20:15:07.775827 | orchestrator | Monday 02 June 2025 20:15:06 +0000 (0:01:00.128) 0:04:40.184 ***********
2025-06-02 20:15:07.775832 | orchestrator | ===============================================================================
2025-06-02 20:15:07.775837 | orchestrator | neutron : Restart neutron-ovn-metadata-agent container ----------------- 60.13s
2025-06-02 20:15:07.775842 | orchestrator | neutron : Running Neutron bootstrap container -------------------------- 42.10s
2025-06-02 20:15:07.775847 | orchestrator | neutron : Restart neutron-server container ----------------------------- 31.06s
2025-06-02 20:15:07.775852 | orchestrator | service-ks-register : neutron | Granting user roles --------------------- 7.26s
2025-06-02 20:15:07.775861 | orchestrator | service-ks-register : neutron | Creating endpoints ---------------------- 6.56s
2025-06-02 20:15:07.775866 | orchestrator | neutron : Copying over neutron.conf ------------------------------------- 5.52s
2025-06-02 20:15:07.775871 | orchestrator | neutron : Copying over neutron_ovn_metadata_agent.ini ------------------- 5.01s
2025-06-02 20:15:07.775876 | orchestrator | neutron : Copying over config.json files for services ------------------- 4.51s
2025-06-02 20:15:07.775881 | orchestrator | service-cert-copy : neutron | Copying over extra CA certificates -------- 4.43s
2025-06-02 20:15:07.775885 | orchestrator | service-ks-register : neutron | Creating users -------------------------- 4.22s
2025-06-02 20:15:07.775890 | orchestrator | neutron : Copying over ml2_conf.ini ------------------------------------- 4.01s
2025-06-02 20:15:07.775895 | orchestrator | service-ks-register : neutron | Creating roles -------------------------- 3.55s
2025-06-02 20:15:07.775899 | orchestrator | neutron : Copying over neutron_vpnaas.conf ------------------------------ 3.52s
2025-06-02 20:15:07.775904 | orchestrator | neutron : Copying over ovn_agent.ini ------------------------------------ 3.43s
2025-06-02 20:15:07.775909 | orchestrator | service-ks-register : neutron | Creating projects ----------------------- 3.39s
2025-06-02 20:15:07.775914 | orchestrator | service-ks-register : neutron | Creating services ----------------------- 3.37s
2025-06-02 20:15:07.775924 | orchestrator | neutron : Ensuring config directories exist ----------------------------- 3.34s
2025-06-02 20:15:07.775928 | orchestrator | neutron : Check neutron containers -------------------------------------- 3.31s
2025-06-02 20:15:07.775933 | orchestrator | neutron : Creating TLS backend PEM File --------------------------------- 3.16s
2025-06-02 20:15:07.775938 | orchestrator | neutron : Copying over l3_agent.ini ------------------------------------- 3.16s
2025-06-02 20:15:07.776010 | orchestrator | 2025-06-02 20:15:07 | INFO  | Task 53a945be-f52c-4bbb-b1aa-4647c40e8c66 is in state STARTED
2025-06-02 20:15:07.776422 | orchestrator | 2025-06-02 20:15:07 | INFO  | Task 0ec5d894-04ad-4963-adc9-e07e3edb11e6 is in state STARTED
2025-06-02 20:15:07.776434 | orchestrator | 2025-06-02 20:15:07 | INFO  | Wait 1 second(s) until the next check
2025-06-02 20:15:10.807631 | orchestrator | 2025-06-02 20:15:10 | INFO  | Task edda95ee-0720-4434-8b2c-a717dac3476c is in state
STARTED 2025-06-02 20:15:10.809179 | orchestrator | 2025-06-02 20:15:10 | INFO  | Task ce577db7-04a8-45de-ba4c-ffd16e1d3bc7 is in state STARTED 2025-06-02 20:15:10.809881 | orchestrator | 2025-06-02 20:15:10 | INFO  | Task 53a945be-f52c-4bbb-b1aa-4647c40e8c66 is in state STARTED 2025-06-02 20:15:10.811423 | orchestrator | 2025-06-02 20:15:10 | INFO  | Task 0ec5d894-04ad-4963-adc9-e07e3edb11e6 is in state STARTED 2025-06-02 20:15:10.811521 | orchestrator | 2025-06-02 20:15:10 | INFO  | Wait 1 second(s) until the next check 2025-06-02 20:15:13.845859 | orchestrator | 2025-06-02 20:15:13 | INFO  | Task edda95ee-0720-4434-8b2c-a717dac3476c is in state STARTED 2025-06-02 20:15:13.845970 | orchestrator | 2025-06-02 20:15:13 | INFO  | Task ce577db7-04a8-45de-ba4c-ffd16e1d3bc7 is in state STARTED 2025-06-02 20:15:13.845985 | orchestrator | 2025-06-02 20:15:13 | INFO  | Task 53a945be-f52c-4bbb-b1aa-4647c40e8c66 is in state STARTED 2025-06-02 20:15:13.845996 | orchestrator | 2025-06-02 20:15:13 | INFO  | Task 0ec5d894-04ad-4963-adc9-e07e3edb11e6 is in state STARTED 2025-06-02 20:15:13.846080 | orchestrator | 2025-06-02 20:15:13 | INFO  | Wait 1 second(s) until the next check 2025-06-02 20:15:16.873104 | orchestrator | 2025-06-02 20:15:16 | INFO  | Task edda95ee-0720-4434-8b2c-a717dac3476c is in state STARTED 2025-06-02 20:15:16.873426 | orchestrator | 2025-06-02 20:15:16 | INFO  | Task ce577db7-04a8-45de-ba4c-ffd16e1d3bc7 is in state STARTED 2025-06-02 20:15:16.875259 | orchestrator | 2025-06-02 20:15:16 | INFO  | Task 53a945be-f52c-4bbb-b1aa-4647c40e8c66 is in state STARTED 2025-06-02 20:15:16.876143 | orchestrator | 2025-06-02 20:15:16 | INFO  | Task 0ec5d894-04ad-4963-adc9-e07e3edb11e6 is in state STARTED 2025-06-02 20:15:16.876217 | orchestrator | 2025-06-02 20:15:16 | INFO  | Wait 1 second(s) until the next check 2025-06-02 20:15:19.914151 | orchestrator | 2025-06-02 20:15:19 | INFO  | Task edda95ee-0720-4434-8b2c-a717dac3476c is in state STARTED 2025-06-02 
20:15:19.914308 | orchestrator | 2025-06-02 20:15:19 | INFO  | Task ce577db7-04a8-45de-ba4c-ffd16e1d3bc7 is in state STARTED 2025-06-02 20:15:19.914897 | orchestrator | 2025-06-02 20:15:19 | INFO  | Task 53a945be-f52c-4bbb-b1aa-4647c40e8c66 is in state STARTED 2025-06-02 20:15:19.916866 | orchestrator | 2025-06-02 20:15:19 | INFO  | Task 0ec5d894-04ad-4963-adc9-e07e3edb11e6 is in state STARTED 2025-06-02 20:15:19.916918 | orchestrator | 2025-06-02 20:15:19 | INFO  | Wait 1 second(s) until the next check 2025-06-02 20:15:22.949365 | orchestrator | 2025-06-02 20:15:22 | INFO  | Task edda95ee-0720-4434-8b2c-a717dac3476c is in state STARTED 2025-06-02 20:15:22.949579 | orchestrator | 2025-06-02 20:15:22 | INFO  | Task ce577db7-04a8-45de-ba4c-ffd16e1d3bc7 is in state STARTED 2025-06-02 20:15:22.950121 | orchestrator | 2025-06-02 20:15:22 | INFO  | Task 53a945be-f52c-4bbb-b1aa-4647c40e8c66 is in state STARTED 2025-06-02 20:15:22.950749 | orchestrator | 2025-06-02 20:15:22 | INFO  | Task 0ec5d894-04ad-4963-adc9-e07e3edb11e6 is in state STARTED 2025-06-02 20:15:22.950769 | orchestrator | 2025-06-02 20:15:22 | INFO  | Wait 1 second(s) until the next check 2025-06-02 20:15:25.979744 | orchestrator | 2025-06-02 20:15:25 | INFO  | Task edda95ee-0720-4434-8b2c-a717dac3476c is in state STARTED 2025-06-02 20:15:25.980076 | orchestrator | 2025-06-02 20:15:25 | INFO  | Task ce577db7-04a8-45de-ba4c-ffd16e1d3bc7 is in state STARTED 2025-06-02 20:15:25.981711 | orchestrator | 2025-06-02 20:15:25 | INFO  | Task 53a945be-f52c-4bbb-b1aa-4647c40e8c66 is in state STARTED 2025-06-02 20:15:25.982914 | orchestrator | 2025-06-02 20:15:25 | INFO  | Task 0ec5d894-04ad-4963-adc9-e07e3edb11e6 is in state STARTED 2025-06-02 20:15:25.983268 | orchestrator | 2025-06-02 20:15:25 | INFO  | Wait 1 second(s) until the next check 2025-06-02 20:15:29.027662 | orchestrator | 2025-06-02 20:15:29 | INFO  | Task edda95ee-0720-4434-8b2c-a717dac3476c is in state STARTED 2025-06-02 20:15:29.027763 | orchestrator 
| 2025-06-02 20:15:29 | INFO  | Task ce577db7-04a8-45de-ba4c-ffd16e1d3bc7 is in state STARTED 2025-06-02 20:15:29.028307 | orchestrator | 2025-06-02 20:15:29 | INFO  | Task 53a945be-f52c-4bbb-b1aa-4647c40e8c66 is in state STARTED 2025-06-02 20:15:29.028982 | orchestrator | 2025-06-02 20:15:29 | INFO  | Task 0ec5d894-04ad-4963-adc9-e07e3edb11e6 is in state STARTED 2025-06-02 20:15:29.029045 | orchestrator | 2025-06-02 20:15:29 | INFO  | Wait 1 second(s) until the next check 2025-06-02 20:15:32.055994 | orchestrator | 2025-06-02 20:15:32 | INFO  | Task edda95ee-0720-4434-8b2c-a717dac3476c is in state STARTED 2025-06-02 20:15:32.056078 | orchestrator | 2025-06-02 20:15:32 | INFO  | Task ce577db7-04a8-45de-ba4c-ffd16e1d3bc7 is in state STARTED 2025-06-02 20:15:32.056705 | orchestrator | 2025-06-02 20:15:32 | INFO  | Task 53a945be-f52c-4bbb-b1aa-4647c40e8c66 is in state STARTED 2025-06-02 20:15:32.057134 | orchestrator | 2025-06-02 20:15:32 | INFO  | Task 0ec5d894-04ad-4963-adc9-e07e3edb11e6 is in state STARTED 2025-06-02 20:15:32.057216 | orchestrator | 2025-06-02 20:15:32 | INFO  | Wait 1 second(s) until the next check 2025-06-02 20:15:35.091988 | orchestrator | 2025-06-02 20:15:35 | INFO  | Task edda95ee-0720-4434-8b2c-a717dac3476c is in state STARTED 2025-06-02 20:15:35.097098 | orchestrator | 2025-06-02 20:15:35 | INFO  | Task ce577db7-04a8-45de-ba4c-ffd16e1d3bc7 is in state STARTED 2025-06-02 20:15:35.099435 | orchestrator | 2025-06-02 20:15:35 | INFO  | Task 53a945be-f52c-4bbb-b1aa-4647c40e8c66 is in state STARTED 2025-06-02 20:15:35.099628 | orchestrator | 2025-06-02 20:15:35 | INFO  | Task 0ec5d894-04ad-4963-adc9-e07e3edb11e6 is in state STARTED 2025-06-02 20:15:35.099649 | orchestrator | 2025-06-02 20:15:35 | INFO  | Wait 1 second(s) until the next check 2025-06-02 20:15:38.127997 | orchestrator | 2025-06-02 20:15:38 | INFO  | Task edda95ee-0720-4434-8b2c-a717dac3476c is in state STARTED 2025-06-02 20:15:38.128233 | orchestrator | 2025-06-02 20:15:38 | INFO  | 
Task ce577db7-04a8-45de-ba4c-ffd16e1d3bc7 is in state STARTED 2025-06-02 20:15:38.129003 | orchestrator | 2025-06-02 20:15:38 | INFO  | Task 53a945be-f52c-4bbb-b1aa-4647c40e8c66 is in state STARTED 2025-06-02 20:15:38.129590 | orchestrator | 2025-06-02 20:15:38 | INFO  | Task 0ec5d894-04ad-4963-adc9-e07e3edb11e6 is in state STARTED 2025-06-02 20:15:38.129612 | orchestrator | 2025-06-02 20:15:38 | INFO  | Wait 1 second(s) until the next check 2025-06-02 20:15:41.162364 | orchestrator | 2025-06-02 20:15:41 | INFO  | Task edda95ee-0720-4434-8b2c-a717dac3476c is in state STARTED 2025-06-02 20:15:41.162634 | orchestrator | 2025-06-02 20:15:41 | INFO  | Task ce577db7-04a8-45de-ba4c-ffd16e1d3bc7 is in state STARTED 2025-06-02 20:15:41.163939 | orchestrator | 2025-06-02 20:15:41 | INFO  | Task 53a945be-f52c-4bbb-b1aa-4647c40e8c66 is in state STARTED 2025-06-02 20:15:41.163968 | orchestrator | 2025-06-02 20:15:41 | INFO  | Task 0ec5d894-04ad-4963-adc9-e07e3edb11e6 is in state STARTED 2025-06-02 20:15:41.163979 | orchestrator | 2025-06-02 20:15:41 | INFO  | Wait 1 second(s) until the next check 2025-06-02 20:15:44.185956 | orchestrator | 2025-06-02 20:15:44 | INFO  | Task edda95ee-0720-4434-8b2c-a717dac3476c is in state STARTED 2025-06-02 20:15:44.186112 | orchestrator | 2025-06-02 20:15:44 | INFO  | Task ce577db7-04a8-45de-ba4c-ffd16e1d3bc7 is in state STARTED 2025-06-02 20:15:44.188209 | orchestrator | 2025-06-02 20:15:44.188264 | orchestrator | 2025-06-02 20:15:44 | INFO  | Task 53a945be-f52c-4bbb-b1aa-4647c40e8c66 is in state SUCCESS 2025-06-02 20:15:44.189626 | orchestrator | 2025-06-02 20:15:44.189672 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-06-02 20:15:44.189688 | orchestrator | 2025-06-02 20:15:44.189699 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-06-02 20:15:44.189710 | orchestrator | Monday 02 June 2025 20:12:42 +0000 (0:00:00.743) 0:00:00.743 
*********** 2025-06-02 20:15:44.189721 | orchestrator | ok: [testbed-manager] 2025-06-02 20:15:44.189732 | orchestrator | ok: [testbed-node-0] 2025-06-02 20:15:44.189743 | orchestrator | ok: [testbed-node-1] 2025-06-02 20:15:44.189754 | orchestrator | ok: [testbed-node-2] 2025-06-02 20:15:44.189764 | orchestrator | ok: [testbed-node-3] 2025-06-02 20:15:44.189774 | orchestrator | ok: [testbed-node-4] 2025-06-02 20:15:44.189785 | orchestrator | ok: [testbed-node-5] 2025-06-02 20:15:44.189796 | orchestrator | 2025-06-02 20:15:44.189806 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-06-02 20:15:44.189818 | orchestrator | Monday 02 June 2025 20:12:43 +0000 (0:00:01.612) 0:00:02.356 *********** 2025-06-02 20:15:44.189831 | orchestrator | ok: [testbed-manager] => (item=enable_prometheus_True) 2025-06-02 20:15:44.189843 | orchestrator | ok: [testbed-node-0] => (item=enable_prometheus_True) 2025-06-02 20:15:44.189854 | orchestrator | ok: [testbed-node-1] => (item=enable_prometheus_True) 2025-06-02 20:15:44.189865 | orchestrator | ok: [testbed-node-2] => (item=enable_prometheus_True) 2025-06-02 20:15:44.189876 | orchestrator | ok: [testbed-node-3] => (item=enable_prometheus_True) 2025-06-02 20:15:44.189887 | orchestrator | ok: [testbed-node-4] => (item=enable_prometheus_True) 2025-06-02 20:15:44.189896 | orchestrator | ok: [testbed-node-5] => (item=enable_prometheus_True) 2025-06-02 20:15:44.189903 | orchestrator | 2025-06-02 20:15:44.189910 | orchestrator | PLAY [Apply role prometheus] *************************************************** 2025-06-02 20:15:44.189917 | orchestrator | 2025-06-02 20:15:44.189924 | orchestrator | TASK [prometheus : include_tasks] ********************************************** 2025-06-02 20:15:44.189931 | orchestrator | Monday 02 June 2025 20:12:44 +0000 (0:00:01.238) 0:00:03.594 *********** 2025-06-02 20:15:44.189939 | orchestrator | included: /ansible/roles/prometheus/tasks/deploy.yml for 
testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-02 20:15:44.189947 | orchestrator | 2025-06-02 20:15:44.189953 | orchestrator | TASK [prometheus : Ensuring config directories exist] ************************** 2025-06-02 20:15:44.190055 | orchestrator | Monday 02 June 2025 20:12:46 +0000 (0:00:01.780) 0:00:05.374 *********** 2025-06-02 20:15:44.190077 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20250530', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-06-02 20:15:44.190119 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-02 20:15:44.190132 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-02 20:15:44.190143 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-02 20:15:44.190172 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-02 20:15:44.190184 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-02 20:15:44.190197 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250530', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 20:15:44.190208 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-02 20:15:44.190241 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-02 20:15:44.190254 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': 
['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-02 20:15:44.190269 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-02 20:15:44.190288 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250530', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 20:15:44.190300 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250530', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 
20:15:44.190312 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-02 20:15:44.190324 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250530', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 20:15:44.190348 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-02 20:15:44.190362 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20250530', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-06-02 20:15:44.190376 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-06-02 20:15:44.190394 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250530', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 20:15:44.190406 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250530', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 20:15:44.190418 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-06-02 20:15:44.190429 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20250530', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 20:15:44.190474 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-02 20:15:44.190487 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-06-02 20:15:44.190500 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-02 20:15:44.190512 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-02 20:15:44.190532 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250530', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 20:15:44.190545 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250530', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 20:15:44.190557 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250530', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 20:15:44.190576 | orchestrator | 2025-06-02 20:15:44.190588 | orchestrator | TASK [prometheus : include_tasks] ********************************************** 2025-06-02 20:15:44.190601 | orchestrator | Monday 02 June 2025 20:12:49 +0000 (0:00:02.777) 0:00:08.152 *********** 2025-06-02 20:15:44.190612 | orchestrator | included: 
/ansible/roles/prometheus/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-02 20:15:44.190625 | orchestrator | 2025-06-02 20:15:44.190636 | orchestrator | TASK [service-cert-copy : prometheus | Copying over extra CA certificates] ***** 2025-06-02 20:15:44.190648 | orchestrator | Monday 02 June 2025 20:12:50 +0000 (0:00:01.220) 0:00:09.372 *********** 2025-06-02 20:15:44.190665 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20250530', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-06-02 20:15:44.190677 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-02 20:15:44.190688 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 
'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-02 20:15:44.190708 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-02 20:15:44.190719 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-02 20:15:44.190730 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-02 20:15:44.190748 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-02 20:15:44.190764 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-02 20:15:44.190777 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250530', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 20:15:44.190789 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250530', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 20:15:44.190801 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250530', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 20:15:44.190819 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-02 20:15:44.190831 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', 
'/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-02 20:15:44.190848 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-02 20:15:44.190859 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250530', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 20:15:44.190875 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-02 20:15:44.190886 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 
'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250530', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 20:15:44.190897 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250530', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 20:15:44.190917 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20250530', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-06-02 20:15:44.190992 | orchestrator | changed: 
[testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-06-02 20:15:44.191006 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-06-02 20:15:44.191018 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-06-02 20:15:44.191036 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-02 20:15:44.191048 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-02 20:15:44.191060 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20250530', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 20:15:44.191079 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-02 20:15:44.191091 
| orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250530', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 20:15:44.191110 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250530', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 20:15:44.191122 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250530', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 20:15:44.191134 | orchestrator | 2025-06-02 20:15:44.191145 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS certificate] *** 2025-06-02 20:15:44.191157 | orchestrator | Monday 02 June 2025 20:12:56 +0000 (0:00:05.645) 0:00:15.018 *********** 2025-06-02 20:15:44.191175 | orchestrator | 
skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20250530', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-06-02 20:15:44.191188 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-06-02 20:15:44.191200 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-06-02 20:15:44.191222 | 
orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20250530', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-06-02 20:15:44.191241 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20250530', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 20:15:44.191253 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-06-02 20:15:44.191268 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250530', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 20:15:44.191275 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250530', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 20:15:44.191282 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-06-02 20:15:44.191289 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 
'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250530', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 20:15:44.191296 | orchestrator | skipping: [testbed-manager] 2025-06-02 20:15:44.191316 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-06-02 20:15:44.191323 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250530', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 20:15:44.191330 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250530', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 20:15:44.191337 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-06-02 20:15:44.191348 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250530', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 20:15:44.191355 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:15:44.191361 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:15:44.191368 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-06-02 20:15:44.191375 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250530', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 20:15:44.191382 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250530', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 20:15:44.191399 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-06-02 20:15:44.191406 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250530', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 20:15:44.191413 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:15:44.191420 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-06-02 20:15:44.191430 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-06-02 20:15:44.191437 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-06-02 20:15:44.191662 | orchestrator | skipping: [testbed-node-3] 2025-06-02 20:15:44.191686 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-06-02 20:15:44.191695 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-06-02 20:15:44.191721 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-06-02 20:15:44.191729 | orchestrator | skipping: [testbed-node-5] => (item={'key': 
'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-06-02 20:15:44.191736 | orchestrator | skipping: [testbed-node-4] 2025-06-02 20:15:44.191742 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-06-02 20:15:44.191761 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-06-02 20:15:44.191769 | orchestrator | skipping: [testbed-node-5] 2025-06-02 20:15:44.191776 | orchestrator | 2025-06-02 20:15:44.191796 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS key] *** 2025-06-02 20:15:44.191803 | orchestrator | Monday 02 June 
2025 20:12:57 +0000 (0:00:01.470) 0:00:16.489 *********** 2025-06-02 20:15:44.191810 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-06-02 20:15:44.191818 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250530', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 20:15:44.191830 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250530', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 20:15:44.191842 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': 
['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-06-02 20:15:44.191849 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250530', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 20:15:44.191856 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:15:44.191863 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20250530', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-06-02 20:15:44.191874 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-06-02 20:15:44.191881 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-06-02 20:15:44.191888 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-06-02 20:15:44.191900 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250530', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 20:15:44.191914 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20250530', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-06-02 20:15:44.191922 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250530', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 20:15:44.191929 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20250530', 
'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 20:15:44.191944 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-06-02 20:15:44.191951 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250530', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 20:15:44.191965 | orchestrator | skipping: [testbed-manager] 2025-06-02 20:15:44.191972 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:15:44.191979 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-06-02 20:15:44.191986 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250530', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 20:15:44.191998 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250530', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 20:15:44.192006 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-06-02 20:15:44.192012 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 
'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250530', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 20:15:44.192019 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:15:44.192026 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-06-02 20:15:44.192036 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-06-02 20:15:44.192048 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250530.0.20250530', 'volumes': 
['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-06-02 20:15:44.192055 | orchestrator | skipping: [testbed-node-3] 2025-06-02 20:15:44.192061 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-06-02 20:15:44.192383 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-06-02 20:15:44.192394 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-06-02 
20:15:44.192401 | orchestrator | skipping: [testbed-node-4] 2025-06-02 20:15:44.192408 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-06-02 20:15:44.192415 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-06-02 20:15:44.192426 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-06-02 20:15:44.192440 | orchestrator | skipping: [testbed-node-5] 2025-06-02 20:15:44.192467 | orchestrator | 2025-06-02 20:15:44.192474 | orchestrator | TASK [prometheus : Copying over config.json files] 
***************************** 2025-06-02 20:15:44.192481 | orchestrator | Monday 02 June 2025 20:12:59 +0000 (0:00:01.778) 0:00:18.267 *********** 2025-06-02 20:15:44.192488 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20250530', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-06-02 20:15:44.192495 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-02 20:15:44.192508 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-02 20:15:44.192515 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-02 20:15:44.192522 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-02 20:15:44.192529 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-02 20:15:44.192545 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-02 20:15:44.192552 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-02 20:15:44.192559 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-02 20:15:44.192565 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250530', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 20:15:44.192579 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250530', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 20:15:44.192586 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250530', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 20:15:44.192593 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-02 20:15:44.192600 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-02 20:15:44.192616 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20250530', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-06-02 20:15:44.192624 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', 
'/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-02 20:15:44.192631 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250530', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 20:15:44.192643 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250530', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 20:15:44.192650 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250530', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 20:15:44.192657 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-06-02 20:15:44.192669 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20250530', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 20:15:44.192679 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-06-02 20:15:44.192686 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-06-02 20:15:44.192693 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-06-02 20:15:44.192703 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-06-02 20:15:44.192711 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-06-02 20:15:44.192718 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250530', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 20:15:44.192729 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250530', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 20:15:44.192739 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250530', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 20:15:44.192746 | orchestrator |
2025-06-02 20:15:44.192753 | orchestrator | TASK [prometheus : Find custom prometheus alert rules files] *******************
2025-06-02 20:15:44.192760 | orchestrator | Monday 02 June 2025 20:13:06 +0000 (0:00:06.856) 0:00:25.124 ***********
2025-06-02 20:15:44.192767 | orchestrator | ok: [testbed-manager -> localhost]
2025-06-02 20:15:44.192774 | orchestrator |
2025-06-02 20:15:44.192780 | orchestrator | TASK [prometheus : Copying over custom prometheus alert rules files] ***********
2025-06-02 20:15:44.192806 | orchestrator | Monday 02 June 2025 20:13:07 +0000 (0:00:01.190) 0:00:26.314 ***********
2025-06-02 20:15:44.192813 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1089709, 'dev': 113, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748892698.702048, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-02 20:15:44.192820 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1089709, 'dev': 113, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748892698.702048, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-02 20:15:44.192831 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1089709, 'dev': 113, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748892698.702048, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-02 20:15:44.192839 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1089709, 'dev': 113, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748892698.702048, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-02 20:15:44.192851 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1089692, 'dev': 113, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748892698.7000482, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-02 20:15:44.192858 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1089709, 'dev': 113, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748892698.702048, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-02 20:15:44.192871 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1089709, 'dev': 113, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748892698.702048, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-02 20:15:44.192878 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1089709, 'dev': 113, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748892698.702048, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-02 20:15:44.192884 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1089692, 'dev': 113, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748892698.7000482, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-02 20:15:44.192895 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1089692, 'dev': 113, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748892698.7000482, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-02 20:15:44.192902 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1089671, 'dev': 113, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748892698.695048, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-02 20:15:44.192913 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1089692, 'dev': 113, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748892698.7000482, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-02 20:15:44.192920 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1089692, 'dev': 113, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748892698.7000482, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-02 20:15:44.192931 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1089692, 'dev': 113, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748892698.7000482, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-02 20:15:44.192938 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1089671, 'dev': 113, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748892698.695048, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-02 20:15:44.192945 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1089692, 'dev': 113, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748892698.7000482, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-02 20:15:44.192956 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1089671, 'dev': 113, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748892698.695048, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-02 20:15:44.192963 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1089671, 'dev': 113, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748892698.695048, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-02 20:15:44.192975 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1089672, 'dev': 113, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748892698.696048, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-02 20:15:44.192981 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1089671, 'dev': 113, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748892698.695048, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-02 20:15:44.192992 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1089672, 'dev': 113, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748892698.696048, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-02 20:15:44.193000 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1089672, 'dev': 113, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748892698.696048, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-02 20:15:44.193017 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1089672, 'dev': 113, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748892698.696048, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-02 20:15:44.193030 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1089672, 'dev': 113, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748892698.696048, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-02 20:15:44.193039 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1089671, 'dev': 113, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748892698.695048, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-02 20:15:44.193051 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1089689, 'dev': 113, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748892698.699048, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-02 20:15:44.193112 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1089689, 'dev': 113, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748892698.699048, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-02 20:15:44.193126 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1089689, 'dev': 113, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748892698.699048, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-02 20:15:44.193156 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1089689, 'dev': 113, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748892698.699048, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-02 20:15:44.193184 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1089671, 'dev': 113, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748892698.695048, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-02 20:15:44.193221 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1089674, 'dev': 113, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748892698.698048, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-02 20:15:44.193279 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1089674, 'dev': 113, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748892698.698048, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-02 20:15:44.193288 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1089689, 'dev': 113, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748892698.699048, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-02 20:15:44.193296 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1089672, 'dev': 113, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748892698.696048, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-02 20:15:44.193307 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1089685, 'dev': 113, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748892698.699048, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-02 20:15:44.193316 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1089674, 'dev': 113, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748892698.698048, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-02 20:15:44.193324 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1089674, 'dev': 113, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748892698.698048, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-02 20:15:44.193336 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1089689, 'dev': 113, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748892698.699048, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-02 20:15:44.193348 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1089685, 'dev': 113, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748892698.699048, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-02 20:15:44.193356 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1089674, 'dev': 113, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748892698.698048, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-02 20:15:44.193363 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1089685, 'dev': 113, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748892698.699048, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-02 20:15:44.193374 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1089696, 'dev': 113, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748892698.7010481, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-02 20:15:44.193381 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1089674, 'dev': 113, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748892698.698048, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-02 20:15:44.193388 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1089696, 'dev': 113, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748892698.7010481, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-02 20:15:44.193395 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1089696, 'dev': 113, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748892698.7010481, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-02 20:15:44.193410 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1089672, 'dev': 113, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748892698.696048, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-02 20:15:44.193417 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1089685, 'dev': 113, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748892698.699048, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-02 20:15:44.193424 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1089685, 'dev': 113, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748892698.699048, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-02 20:15:44.193434 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1089685, 'dev': 113, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748892698.699048, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-02 20:15:44.193458 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1089706, 'dev': 113, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748892698.702048, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-02 20:15:44.193466 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1089706, 'dev': 113, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748892698.702048, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-02 20:15:44.193473 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1089696, 'dev': 113, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748892698.7010481, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-02 20:15:44.193489 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1089706, 'dev': 113, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748892698.702048, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-02 20:15:44.193497 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1089723, 'dev': 113, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748892698.7040482, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-02 20:15:44.193504 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1089696, 'dev': 113, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748892698.7010481, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-02 20:15:44.193514 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1089723, 'dev': 113, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748892698.7040482, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-02 20:15:44.193520 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1089696, 'dev': 113, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748892698.7010481, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-02 20:15:44.193527 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1089706, 'dev': 113, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748892698.702048, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-02 20:15:44.193549 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False,
'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1089699, 'dev': 113, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748892698.7010481, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 20:15:44.193567 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1089699, 'dev': 113, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748892698.7010481, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 20:15:44.193574 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1089723, 'dev': 113, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748892698.7040482, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 20:15:44.193581 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1089689, 'dev': 113, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 
1748892698.699048, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-02 20:15:44.193592 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1089723, 'dev': 113, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748892698.7040482, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 20:15:44.193599 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1089706, 'dev': 113, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748892698.702048, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 20:15:44.193606 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1089673, 'dev': 113, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748892698.696048, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 
'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 20:15:44.193618 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1089706, 'dev': 113, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748892698.702048, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 20:15:44.193628 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1089673, 'dev': 113, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748892698.696048, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 20:15:44.193636 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1089683, 'dev': 113, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748892698.698048, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 20:15:44.193643 | orchestrator | skipping: [testbed-node-3] => 
(item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1089699, 'dev': 113, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748892698.7010481, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 20:15:44.193653 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1089723, 'dev': 113, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748892698.7040482, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 20:15:44.193660 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1089670, 'dev': 113, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748892698.695048, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 20:15:44.193667 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 
'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1089699, 'dev': 113, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748892698.7010481, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 20:15:44.193678 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1089673, 'dev': 113, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748892698.696048, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 20:15:44.193690 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1089723, 'dev': 113, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748892698.7040482, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 20:15:44.193697 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1089683, 'dev': 113, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 
'ctime': 1748892698.698048, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 20:15:44.193704 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1089699, 'dev': 113, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748892698.7010481, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 20:15:44.193714 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1089674, 'dev': 113, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748892698.698048, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-02 20:15:44.193721 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1089691, 'dev': 113, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748892698.699048, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 
'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 20:15:44.193733 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1089699, 'dev': 113, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748892698.7010481, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 20:15:44.193740 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1089673, 'dev': 113, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748892698.696048, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 20:15:44.194094 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1089673, 'dev': 113, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748892698.696048, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 20:15:44.194111 | orchestrator | skipping: [testbed-node-3] => (item={'path': 
'/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1089683, 'dev': 113, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748892698.698048, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 20:15:44.194119 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1089670, 'dev': 113, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748892698.695048, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 20:15:44.194132 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1089673, 'dev': 113, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748892698.696048, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 20:15:44.194139 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 
'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1089722, 'dev': 113, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748892698.7040482, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 20:15:44.194152 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1089683, 'dev': 113, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748892698.698048, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 20:15:44.194159 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1089683, 'dev': 113, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748892698.698048, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 20:15:44.194173 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1089670, 'dev': 113, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 
'ctime': 1748892698.695048, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 20:15:44.194180 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1089685, 'dev': 113, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748892698.699048, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-02 20:15:44.194187 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1089681, 'dev': 113, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748892698.698048, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 20:15:44.194198 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1089670, 'dev': 113, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748892698.695048, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 
'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 20:15:44.194205 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1089691, 'dev': 113, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748892698.699048, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 20:15:44.194220 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1089683, 'dev': 113, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748892698.698048, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 20:15:44.194227 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1089670, 'dev': 113, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748892698.695048, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 20:15:44.194237 | orchestrator | skipping: [testbed-node-3] => 
(item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1089691, 'dev': 113, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748892698.699048, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 20:15:44.194245 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1089670, 'dev': 113, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748892698.695048, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 20:15:44.194251 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1089722, 'dev': 113, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748892698.7040482, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 20:15:44.194262 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 
'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1089710, 'dev': 113, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748892698.7030482, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 20:15:44.194269 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:15:44.194280 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1089722, 'dev': 113, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748892698.7040482, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 20:15:44.194287 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1089691, 'dev': 113, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748892698.699048, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 20:15:44.194294 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1089681, 
'dev': 113, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748892698.698048, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-02 20:15:44.194304 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1089691, 'dev': 113, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748892698.699048, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-02 20:15:44.194311 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1089691, 'dev': 113, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748892698.699048, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-02 20:15:44.194318 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1089696, 'dev': 113, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748892698.7010481, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-02 20:15:44.194329 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1089722, 'dev': 113, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748892698.7040482, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-02 20:15:44.194341 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1089681, 'dev': 113, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748892698.698048, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-02 20:15:44.194348 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1089710, 'dev': 113, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748892698.7030482, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-02 20:15:44.194355 | orchestrator | skipping: [testbed-node-3]
2025-06-02 20:15:44.194361 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1089722, 'dev': 113, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748892698.7040482, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-02 20:15:44.194372 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1089722, 'dev': 113, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748892698.7040482, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-02 20:15:44.194379 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1089681, 'dev': 113, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748892698.698048, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-02 20:15:44.194386 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1089681, 'dev': 113, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748892698.698048, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-02 20:15:44.194398 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1089681, 'dev': 113, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748892698.698048, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-02 20:15:44.194410 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1089710, 'dev': 113, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748892698.7030482, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-02 20:15:44.194416 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:15:44.194436 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1089710, 'dev': 113, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748892698.7030482, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-02 20:15:44.194475 | orchestrator | skipping: [testbed-node-4]
2025-06-02 20:15:44.194482 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1089710, 'dev': 113, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748892698.7030482, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-02 20:15:44.194489 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:15:44.194501 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1089706, 'dev': 113, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748892698.702048, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-02 20:15:44.194508 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1089710, 'dev': 113, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748892698.7030482, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-02 20:15:44.194515 | orchestrator | skipping: [testbed-node-5]
2025-06-02 20:15:44.194522 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1089723, 'dev': 113, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748892698.7040482, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-02 20:15:44.194538 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1089699, 'dev': 113, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748892698.7010481, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-02 20:15:44.194545 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1089673, 'dev': 113, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748892698.696048, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-02 20:15:44.194551 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1089683, 'dev': 113, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748892698.698048, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-02 20:15:44.194558 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1089670, 'dev': 113, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748892698.695048, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-02 20:15:44.194569 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1089691, 'dev': 113, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748892698.699048, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-02 20:15:44.194576 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1089722, 'dev': 113, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748892698.7040482, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-02 20:15:44.194583 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1089681, 'dev': 113, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748892698.698048, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-02 20:15:44.194597 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1089710, 'dev': 113, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748892698.7030482, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-02 20:15:44.194604 | orchestrator |
2025-06-02 20:15:44.194611 | orchestrator | TASK [prometheus : Find prometheus common config overrides] ********************
2025-06-02 20:15:44.194618 | orchestrator | Monday 02 June 2025 20:13:29 +0000 (0:00:21.987) 0:00:48.302 ***********
2025-06-02 20:15:44.194624 | orchestrator | ok: [testbed-manager -> localhost]
2025-06-02 20:15:44.194631 | orchestrator |
2025-06-02 20:15:44.194638 | orchestrator | TASK [prometheus : Find prometheus host config overrides] **********************
2025-06-02 20:15:44.194644 | orchestrator | Monday 02 June 2025 20:13:30 +0000 (0:00:00.682) 0:00:48.984 ***********
2025-06-02 20:15:44.194651 | orchestrator | [WARNING]: Skipped
2025-06-02 20:15:44.194658 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-06-02 20:15:44.194665 | orchestrator | manager/prometheus.yml.d' path due to this access issue:
2025-06-02 20:15:44.194671 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-06-02 20:15:44.194678 | orchestrator | manager/prometheus.yml.d' is not a directory
2025-06-02 20:15:44.194685 | orchestrator | ok: [testbed-manager -> localhost]
2025-06-02 20:15:44.194691 | orchestrator | [WARNING]: Skipped
2025-06-02 20:15:44.194698 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-06-02 20:15:44.194704 | orchestrator | node-0/prometheus.yml.d' path due to this access issue:
2025-06-02 20:15:44.194712 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-06-02 20:15:44.194720 | orchestrator | node-0/prometheus.yml.d' is not a directory
2025-06-02 20:15:44.194727 | orchestrator | [WARNING]: Skipped
2025-06-02 20:15:44.194735 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-06-02 20:15:44.194742 | orchestrator | node-1/prometheus.yml.d' path due to this access issue:
2025-06-02 20:15:44.194749 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-06-02 20:15:44.194756 | orchestrator | node-1/prometheus.yml.d' is not a directory
2025-06-02 20:15:44.194764 | orchestrator | [WARNING]: Skipped
2025-06-02 20:15:44.194771 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-06-02 20:15:44.194779 | orchestrator | node-2/prometheus.yml.d' path due to this access issue:
2025-06-02 20:15:44.194786 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-06-02 20:15:44.194794 | orchestrator | node-2/prometheus.yml.d' is not a directory
2025-06-02 20:15:44.194801 | orchestrator | [WARNING]: Skipped
2025-06-02 20:15:44.194809 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-06-02 20:15:44.194820 | orchestrator | node-4/prometheus.yml.d' path due to this access issue:
2025-06-02 20:15:44.194828 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-06-02 20:15:44.194835 | orchestrator | node-4/prometheus.yml.d' is not a directory
2025-06-02 20:15:44.194842 | orchestrator | [WARNING]: Skipped
2025-06-02 20:15:44.194854 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-06-02 20:15:44.194862 | orchestrator | node-3/prometheus.yml.d' path due to this access issue:
2025-06-02 20:15:44.194870 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-06-02 20:15:44.194877 | orchestrator | node-3/prometheus.yml.d' is not a directory
2025-06-02 20:15:44.194885 | orchestrator | [WARNING]: Skipped
2025-06-02 20:15:44.194892 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-06-02 20:15:44.194900 | orchestrator | node-5/prometheus.yml.d' path due to this access issue:
2025-06-02 20:15:44.194907 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-06-02 20:15:44.194915 | orchestrator | node-5/prometheus.yml.d' is not a directory
2025-06-02 20:15:44.194922 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-06-02 20:15:44.194930 | orchestrator | ok: [testbed-node-1 -> localhost]
2025-06-02 20:15:44.194937 | orchestrator | ok: [testbed-node-2 -> localhost]
2025-06-02 20:15:44.194945 | orchestrator | ok: [testbed-node-4 -> localhost]
2025-06-02 20:15:44.194952 | orchestrator | ok: [testbed-node-3 -> localhost]
2025-06-02 20:15:44.194960 | orchestrator | ok: [testbed-node-5 -> localhost]
2025-06-02 20:15:44.194968 | orchestrator |
2025-06-02 20:15:44.194975 | orchestrator | TASK [prometheus : Copying over prometheus config file] ************************
2025-06-02 20:15:44.194983 | orchestrator | Monday 02 June 2025 20:13:32 +0000 (0:00:01.783) 0:00:50.768 ***********
2025-06-02 20:15:44.194990 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2025-06-02 20:15:44.194999 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2025-06-02 20:15:44.195007 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2025-06-02 20:15:44.195015 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:15:44.195022 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:15:44.195030 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:15:44.195037 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2025-06-02 20:15:44.195045 | orchestrator | skipping: [testbed-node-3]
2025-06-02 20:15:44.195053 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2025-06-02 20:15:44.195060 | orchestrator | skipping: [testbed-node-4]
2025-06-02 20:15:44.195068 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2025-06-02 20:15:44.195079 | orchestrator | skipping: [testbed-node-5]
2025-06-02 20:15:44.195087 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2025-06-02 20:15:44.195094 | orchestrator |
2025-06-02 20:15:44.195102 | orchestrator | TASK [prometheus : Copying over prometheus web config file] ********************
2025-06-02 20:15:44.195110 | orchestrator | Monday 02 June 2025 20:13:47 +0000 (0:00:15.554) 0:01:06.323 ***********
2025-06-02 20:15:44.195117 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2025-06-02 20:15:44.195126 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2025-06-02 20:15:44.195133 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:15:44.195139 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:15:44.195146 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2025-06-02 20:15:44.195152 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:15:44.195159 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2025-06-02 20:15:44.195165 | orchestrator | skipping: [testbed-node-3]
2025-06-02 20:15:44.195172 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2025-06-02 20:15:44.195183 | orchestrator | skipping: [testbed-node-4]
2025-06-02 20:15:44.195190 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2025-06-02 20:15:44.195197 | orchestrator | skipping: [testbed-node-5]
2025-06-02 20:15:44.195203 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2025-06-02 20:15:44.195209 | orchestrator |
2025-06-02 20:15:44.195216 | orchestrator | TASK [prometheus : Copying over prometheus alertmanager config file] ***********
2025-06-02 20:15:44.195222 | orchestrator | Monday 02 June 2025 20:13:50 +0000 (0:00:02.677) 0:01:09.001 ***********
2025-06-02 20:15:44.195229 | orchestrator | skipping: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2025-06-02 20:15:44.195236 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:15:44.195243 | orchestrator | skipping: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2025-06-02 20:15:44.195249 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:15:44.195256 | orchestrator | skipping: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2025-06-02 20:15:44.195262 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:15:44.195272 | orchestrator | skipping: [testbed-node-3] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2025-06-02 20:15:44.195279 | orchestrator | skipping: [testbed-node-3]
2025-06-02 20:15:44.195285 | orchestrator | skipping: [testbed-node-5] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2025-06-02 20:15:44.195292 | orchestrator | skipping: [testbed-node-5]
2025-06-02 20:15:44.195298 | orchestrator | skipping: [testbed-node-4] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2025-06-02 20:15:44.195305 | orchestrator | skipping: [testbed-node-4]
2025-06-02 20:15:44.195311 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2025-06-02 20:15:44.195318 | orchestrator |
2025-06-02 20:15:44.195325 | orchestrator | TASK [prometheus : Find custom Alertmanager alert notification templates] ******
2025-06-02 20:15:44.195331 | orchestrator | Monday 02 June 2025 20:13:52 +0000 (0:00:01.715) 0:01:10.716 ***********
2025-06-02 20:15:44.195338 | orchestrator | ok: [testbed-manager -> localhost]
2025-06-02 20:15:44.195344 | orchestrator |
2025-06-02 20:15:44.195351 | orchestrator | TASK [prometheus : Copying over custom Alertmanager alert notification templates] ***
2025-06-02 20:15:44.195357 | orchestrator | Monday 02 June 2025 20:13:52 +0000 (0:00:00.759) 0:01:11.476 ***********
2025-06-02 20:15:44.195364 | orchestrator | skipping: [testbed-manager]
2025-06-02 20:15:44.195370 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:15:44.195377 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:15:44.195383 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:15:44.195390 | orchestrator | skipping: [testbed-node-3]
2025-06-02 20:15:44.195396 | orchestrator | skipping: [testbed-node-4]
2025-06-02 20:15:44.195402 | orchestrator | skipping: [testbed-node-5]
2025-06-02 20:15:44.195409 | orchestrator |
2025-06-02 20:15:44.195416 | orchestrator | TASK [prometheus : Copying over my.cnf for mysqld_exporter] ********************
2025-06-02 20:15:44.195422 | orchestrator | Monday 02 June 2025 20:13:53 +0000 (0:00:00.817) 0:01:12.294 ***********
2025-06-02 20:15:44.195428 | orchestrator | skipping: [testbed-manager]
2025-06-02 20:15:44.195435 | orchestrator | skipping: [testbed-node-4]
2025-06-02 20:15:44.195518 | orchestrator | skipping: [testbed-node-3]
2025-06-02 20:15:44.195527 | orchestrator | skipping: [testbed-node-5]
2025-06-02 20:15:44.195533 | orchestrator | changed: [testbed-node-0]
2025-06-02 20:15:44.195540 | orchestrator | changed: [testbed-node-1]
2025-06-02 20:15:44.195546 | orchestrator | changed: [testbed-node-2]
2025-06-02 20:15:44.195553 | orchestrator |
2025-06-02 20:15:44.195560 | orchestrator | TASK [prometheus : Copying cloud config file for openstack exporter] ***********
2025-06-02 20:15:44.195571 | orchestrator | Monday 02 June 2025 20:13:55 +0000 (0:00:01.949) 0:01:14.244 ***********
2025-06-02 20:15:44.195578 | orchestrator | skipping: [testbed-manager] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2025-06-02 20:15:44.195585 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2025-06-02 20:15:44.195595 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2025-06-02 20:15:44.195602 | orchestrator | skipping: [testbed-manager]
2025-06-02 20:15:44.195609 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2025-06-02 20:15:44.195615 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:15:44.195622 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:15:44.195628 | orchestrator | skipping: [testbed-node-3]
2025-06-02 20:15:44.195635 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2025-06-02 20:15:44.195641 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:15:44.195648 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2025-06-02 20:15:44.195655 | orchestrator | skipping: [testbed-node-4]
2025-06-02 20:15:44.195661 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2025-06-02 20:15:44.195668 | orchestrator | skipping: [testbed-node-5]
2025-06-02 20:15:44.195674 | orchestrator |
2025-06-02 20:15:44.195681 | orchestrator | TASK [prometheus : Copying config file for blackbox exporter] ******************
2025-06-02 20:15:44.195687 | orchestrator | Monday 02 June 2025 20:13:57 +0000 (0:00:01.959) 0:01:16.203 ***********
2025-06-02 20:15:44.195694 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2025-06-02 20:15:44.195700 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:15:44.195707 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2025-06-02 20:15:44.195714 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2025-06-02 20:15:44.195720 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:15:44.195726 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:15:44.195732 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2025-06-02 20:15:44.195738 | orchestrator | skipping: [testbed-node-4]
2025-06-02 20:15:44.195744 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2025-06-02 20:15:44.195750 | orchestrator | skipping: [testbed-node-3]
2025-06-02 20:15:44.195756 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2025-06-02 20:15:44.195762 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2025-06-02 20:15:44.195769 | orchestrator | skipping: [testbed-node-5]
2025-06-02 20:15:44.195775 | orchestrator |
2025-06-02 20:15:44.195784 | orchestrator | TASK [prometheus : Find extra prometheus server config files] ******************
2025-06-02 20:15:44.195791 | orchestrator | Monday 02 June 2025 20:13:59 +0000 (0:00:01.779) 0:01:17.982 ***********
2025-06-02 20:15:44.195797 | orchestrator | [WARNING]: Skipped
2025-06-02 20:15:44.195803 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' path
2025-06-02 20:15:44.195809 | orchestrator | due to this access issue:
2025-06-02 20:15:44.195815 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' is
2025-06-02 20:15:44.195821 | orchestrator | not a directory
2025-06-02 20:15:44.195827 | orchestrator | ok: [testbed-manager -> localhost]
2025-06-02 20:15:44.195833 | orchestrator |
2025-06-02 20:15:44.195839 | orchestrator | TASK [prometheus : Create subdirectories for extra config files] ***************
2025-06-02 20:15:44.195849 | orchestrator | Monday 02 June 2025 20:14:00 +0000 (0:00:01.137) 0:01:19.120 ***********
2025-06-02 20:15:44.195855 | orchestrator | skipping: [testbed-manager]
2025-06-02 20:15:44.195861 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:15:44.195867 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:15:44.195873 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:15:44.195879 | orchestrator | skipping: [testbed-node-3]
2025-06-02 20:15:44.195885 | orchestrator | skipping: [testbed-node-4]
2025-06-02 20:15:44.195891 | orchestrator | skipping: [testbed-node-5]
2025-06-02 20:15:44.195897 | orchestrator |
2025-06-02 20:15:44.195903 | orchestrator | TASK [prometheus : Template extra prometheus server config files] **************
2025-06-02 20:15:44.195910 | orchestrator | Monday 02 June 2025 20:14:01 +0000 (0:00:00.998) 0:01:20.118 ***********
2025-06-02 20:15:44.195916 | orchestrator | skipping: [testbed-manager]
2025-06-02 20:15:44.195922 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:15:44.195928 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:15:44.195934 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:15:44.195940 | orchestrator | skipping: [testbed-node-3]
2025-06-02 20:15:44.195946 | orchestrator | skipping: [testbed-node-4]
2025-06-02 20:15:44.195952 | orchestrator | skipping: [testbed-node-5]
2025-06-02 20:15:44.195958 | orchestrator |
2025-06-02 20:15:44.195964 | orchestrator | TASK [prometheus : Check prometheus containers] ********************************
2025-06-02 20:15:44.195970 | orchestrator | Monday 02 June 2025 20:14:02 +0000 (0:00:00.771) 0:01:20.889 ***********
2025-06-02 20:15:44.195977 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-06-02 20:15:44.195988 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20250530', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2025-06-02 20:15:44.195996 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-06-02 20:15:44.196003 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-06-02 20:15:44.196014 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-06-02 20:15:44.196025 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-06-02 20:15:44.196032 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-06-02 20:15:44.196038 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250530', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 20:15:44.196048 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-06-02 20:15:44.196054 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250530', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 20:15:44.196061 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-06-02 20:15:44.196067 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-06-02 20:15:44.196084 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-06-02 20:15:44.196091 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group':
'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250530', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 20:15:44.196098 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250530', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 20:15:44.196104 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-02 20:15:44.196115 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20250530', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-06-02 20:15:44.196122 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250530', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 20:15:44.196133 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-06-02 20:15:44.196143 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250530.0.20250530', 'volumes': 
['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-06-02 20:15:44.196150 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250530', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 20:15:44.196156 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20250530', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 20:15:44.196166 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-02 20:15:44.196172 | orchestrator | changed: [testbed-node-5] => (item={'key': 
'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-06-02 20:15:44.196179 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-02 20:15:44.196185 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-02 20:15:44.196199 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250530', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 20:15:44.196206 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250530', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 20:15:44.196212 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250530', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 20:15:44.196218 | orchestrator | 2025-06-02 20:15:44.196224 | orchestrator | TASK [prometheus : Creating prometheus database user and setting permissions] *** 2025-06-02 20:15:44.196231 | orchestrator | Monday 02 June 2025 20:14:06 +0000 (0:00:04.019) 0:01:24.908 *********** 2025-06-02 20:15:44.196237 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2025-06-02 20:15:44.196243 | orchestrator | skipping: [testbed-manager] 2025-06-02 20:15:44.196249 | orchestrator | 2025-06-02 20:15:44.196255 | orchestrator | TASK [prometheus : Flush 
handlers] *********************************************
2025-06-02 20:15:44.196261 | orchestrator | Monday 02 June 2025 20:14:07 +0000 (0:00:01.232) 0:01:26.141 ***********
2025-06-02 20:15:44.196267 | orchestrator |
2025-06-02 20:15:44.196273 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2025-06-02 20:15:44.196279 | orchestrator | Monday 02 June 2025 20:14:07 +0000 (0:00:00.185) 0:01:26.326 ***********
2025-06-02 20:15:44.196285 | orchestrator |
2025-06-02 20:15:44.196291 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2025-06-02 20:15:44.196297 | orchestrator | Monday 02 June 2025 20:14:07 +0000 (0:00:00.058) 0:01:26.385 ***********
2025-06-02 20:15:44.196303 | orchestrator |
2025-06-02 20:15:44.196312 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2025-06-02 20:15:44.196318 | orchestrator | Monday 02 June 2025 20:14:07 +0000 (0:00:00.057) 0:01:26.443 ***********
2025-06-02 20:15:44.196324 | orchestrator |
2025-06-02 20:15:44.196330 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2025-06-02 20:15:44.196336 | orchestrator | Monday 02 June 2025 20:14:07 +0000 (0:00:00.058) 0:01:26.501 ***********
2025-06-02 20:15:44.196342 | orchestrator |
2025-06-02 20:15:44.196348 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2025-06-02 20:15:44.196354 | orchestrator | Monday 02 June 2025 20:14:07 +0000 (0:00:00.056) 0:01:26.558 ***********
2025-06-02 20:15:44.196364 | orchestrator |
2025-06-02 20:15:44.196371 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2025-06-02 20:15:44.196377 | orchestrator | Monday 02 June 2025 20:14:07 +0000 (0:00:00.068) 0:01:26.627 ***********
2025-06-02 20:15:44.196383 | orchestrator |
2025-06-02 20:15:44.196389 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-server container] *************
2025-06-02 20:15:44.196395 | orchestrator | Monday 02 June 2025 20:14:08 +0000 (0:00:00.093) 0:01:26.721 ***********
2025-06-02 20:15:44.196401 | orchestrator | changed: [testbed-manager]
2025-06-02 20:15:44.196407 | orchestrator |
2025-06-02 20:15:44.196413 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-node-exporter container] ******
2025-06-02 20:15:44.196419 | orchestrator | Monday 02 June 2025 20:14:24 +0000 (0:00:16.777) 0:01:43.499 ***********
2025-06-02 20:15:44.196425 | orchestrator | changed: [testbed-node-2]
2025-06-02 20:15:44.196431 | orchestrator | changed: [testbed-node-4]
2025-06-02 20:15:44.196437 | orchestrator | changed: [testbed-node-1]
2025-06-02 20:15:44.196459 | orchestrator | changed: [testbed-node-3]
2025-06-02 20:15:44.196465 | orchestrator | changed: [testbed-node-0]
2025-06-02 20:15:44.196471 | orchestrator | changed: [testbed-node-5]
2025-06-02 20:15:44.196477 | orchestrator | changed: [testbed-manager]
2025-06-02 20:15:44.196483 | orchestrator |
2025-06-02 20:15:44.196489 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-mysqld-exporter container] ****
2025-06-02 20:15:44.196495 | orchestrator | Monday 02 June 2025 20:14:37 +0000 (0:00:12.779) 0:01:56.278 ***********
2025-06-02 20:15:44.196501 | orchestrator | changed: [testbed-node-1]
2025-06-02 20:15:44.196507 | orchestrator | changed: [testbed-node-0]
2025-06-02 20:15:44.196513 | orchestrator | changed: [testbed-node-2]
2025-06-02 20:15:44.196519 | orchestrator |
2025-06-02 20:15:44.196525 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-memcached-exporter container] ***
2025-06-02 20:15:44.196531 | orchestrator | Monday 02 June 2025 20:14:42 +0000 (0:00:04.849) 0:02:01.127 ***********
2025-06-02 20:15:44.196538 | orchestrator | changed: [testbed-node-0]
2025-06-02 20:15:44.196544 | orchestrator | changed: [testbed-node-1]
2025-06-02 20:15:44.196550 | orchestrator | changed: [testbed-node-2]
2025-06-02 20:15:44.196556 | orchestrator |
2025-06-02 20:15:44.196562 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-cadvisor container] ***********
2025-06-02 20:15:44.196568 | orchestrator | Monday 02 June 2025 20:14:48 +0000 (0:00:06.102) 0:02:07.230 ***********
2025-06-02 20:15:44.196574 | orchestrator | changed: [testbed-node-1]
2025-06-02 20:15:44.196580 | orchestrator | changed: [testbed-node-0]
2025-06-02 20:15:44.196586 | orchestrator | changed: [testbed-node-2]
2025-06-02 20:15:44.196596 | orchestrator | changed: [testbed-node-4]
2025-06-02 20:15:44.196602 | orchestrator | changed: [testbed-manager]
2025-06-02 20:15:44.196608 | orchestrator | changed: [testbed-node-3]
2025-06-02 20:15:44.196614 | orchestrator | changed: [testbed-node-5]
2025-06-02 20:15:44.196620 | orchestrator |
2025-06-02 20:15:44.196626 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-alertmanager container] *******
2025-06-02 20:15:44.196632 | orchestrator | Monday 02 June 2025 20:15:04 +0000 (0:00:16.415) 0:02:23.645 ***********
2025-06-02 20:15:44.196638 | orchestrator | changed: [testbed-manager]
2025-06-02 20:15:44.196644 | orchestrator |
2025-06-02 20:15:44.196650 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-elasticsearch-exporter container] ***
2025-06-02 20:15:44.196656 | orchestrator | Monday 02 June 2025 20:15:12 +0000 (0:00:07.395) 0:02:31.041 ***********
2025-06-02 20:15:44.196662 | orchestrator | changed: [testbed-node-0]
2025-06-02 20:15:44.196668 | orchestrator | changed: [testbed-node-2]
2025-06-02 20:15:44.196674 | orchestrator | changed: [testbed-node-1]
2025-06-02 20:15:44.196680 | orchestrator |
2025-06-02 20:15:44.196686 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-blackbox-exporter container] ***
2025-06-02 20:15:44.196692 | orchestrator | Monday 02 June 2025 20:15:23 +0000 (0:00:11.155) 0:02:42.197 ***********
2025-06-02 20:15:44.196698 | orchestrator | changed: [testbed-manager]
2025-06-02 20:15:44.196704 | orchestrator |
2025-06-02 20:15:44.196710 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-libvirt-exporter container] ***
2025-06-02 20:15:44.196720 | orchestrator | Monday 02 June 2025 20:15:28 +0000 (0:00:04.520) 0:02:46.717 ***********
2025-06-02 20:15:44.196726 | orchestrator | changed: [testbed-node-3]
2025-06-02 20:15:44.196733 | orchestrator | changed: [testbed-node-5]
2025-06-02 20:15:44.196739 | orchestrator | changed: [testbed-node-4]
2025-06-02 20:15:44.196745 | orchestrator |
2025-06-02 20:15:44.196751 | orchestrator | PLAY RECAP *********************************************************************
2025-06-02 20:15:44.196757 | orchestrator | testbed-manager : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0
2025-06-02 20:15:44.196764 | orchestrator | testbed-node-0 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0
2025-06-02 20:15:44.196770 | orchestrator | testbed-node-1 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0
2025-06-02 20:15:44.196776 | orchestrator | testbed-node-2 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0
2025-06-02 20:15:44.196782 | orchestrator | testbed-node-3 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2025-06-02 20:15:44.196791 | orchestrator | testbed-node-4 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2025-06-02 20:15:44.196798 | orchestrator | testbed-node-5 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2025-06-02 20:15:44.196804 | orchestrator |
2025-06-02 20:15:44.196810 | orchestrator |
2025-06-02 20:15:44.196816 | orchestrator | TASKS RECAP ********************************************************************
2025-06-02 20:15:44.196822 | orchestrator | Monday 02 June 2025 20:15:40 +0000 (0:00:12.534) 0:02:59.252 ***********
2025-06-02 20:15:44.196828 | orchestrator | ===============================================================================
2025-06-02 20:15:44.196834 | orchestrator | prometheus : Copying over custom prometheus alert rules files ---------- 21.99s
2025-06-02 20:15:44.196840 | orchestrator | prometheus : Restart prometheus-server container ----------------------- 16.78s
2025-06-02 20:15:44.196846 | orchestrator | prometheus : Restart prometheus-cadvisor container --------------------- 16.42s
2025-06-02 20:15:44.196852 | orchestrator | prometheus : Copying over prometheus config file ----------------------- 15.55s
2025-06-02 20:15:44.196858 | orchestrator | prometheus : Restart prometheus-node-exporter container ---------------- 12.78s
2025-06-02 20:15:44.196864 | orchestrator | prometheus : Restart prometheus-libvirt-exporter container ------------- 12.53s
2025-06-02 20:15:44.196870 | orchestrator | prometheus : Restart prometheus-elasticsearch-exporter container ------- 11.16s
2025-06-02 20:15:44.196876 | orchestrator | prometheus : Restart prometheus-alertmanager container ------------------ 7.40s
2025-06-02 20:15:44.196882 | orchestrator | prometheus : Copying over config.json files ----------------------------- 6.86s
2025-06-02 20:15:44.196888 | orchestrator | prometheus : Restart prometheus-memcached-exporter container ------------ 6.10s
2025-06-02 20:15:44.196894 | orchestrator | service-cert-copy : prometheus | Copying over extra CA certificates ----- 5.65s
2025-06-02 20:15:44.196900 | orchestrator | prometheus : Restart prometheus-mysqld-exporter container --------------- 4.85s
2025-06-02 20:15:44.196906 | orchestrator | prometheus : Restart prometheus-blackbox-exporter container ------------- 4.52s
2025-06-02 20:15:44.196912 | orchestrator | prometheus : Check prometheus containers -------------------------------- 4.02s
2025-06-02 20:15:44.196918 | orchestrator | prometheus : Ensuring config directories exist -------------------------- 2.78s
2025-06-02 20:15:44.196924 | orchestrator | prometheus : Copying over prometheus web config file -------------------- 2.68s
2025-06-02 20:15:44.196930 | orchestrator | prometheus : Copying cloud config file for openstack exporter ----------- 1.96s
2025-06-02 20:15:44.196936 | orchestrator | prometheus : Copying over my.cnf for mysqld_exporter -------------------- 1.95s
2025-06-02 20:15:44.196947 | orchestrator | prometheus : Find prometheus host config overrides ---------------------- 1.78s
2025-06-02 20:15:44.196956 | orchestrator | prometheus : include_tasks ---------------------------------------------- 1.78s
2025-06-02 20:15:44.196963 | orchestrator | 2025-06-02 20:15:44 | INFO  | Task 31959657-f934-486f-ae97-741c6a3dbb5e is in state STARTED
2025-06-02 20:15:44.196969 | orchestrator | 2025-06-02 20:15:44 | INFO  | Task 0ec5d894-04ad-4963-adc9-e07e3edb11e6 is in state STARTED
2025-06-02 20:15:44.196975 | orchestrator | 2025-06-02 20:15:44 | INFO  | Wait 1 second(s) until the next check
2025-06-02 20:15:47.219859 | orchestrator | 2025-06-02 20:15:47 | INFO  | Task edda95ee-0720-4434-8b2c-a717dac3476c is in state STARTED
2025-06-02 20:15:47.220004 | orchestrator | 2025-06-02 20:15:47 | INFO  | Task ce577db7-04a8-45de-ba4c-ffd16e1d3bc7 is in state STARTED
2025-06-02 20:15:47.220675 | orchestrator | 2025-06-02 20:15:47 | INFO  | Task
31959657-f934-486f-ae97-741c6a3dbb5e is in state STARTED 2025-06-02 20:15:50.245519 | orchestrator | 2025-06-02 20:15:50 | INFO  | Task 0ec5d894-04ad-4963-adc9-e07e3edb11e6 is in state STARTED 2025-06-02 20:15:50.245554 | orchestrator | 2025-06-02 20:15:50 | INFO  | Wait 1 second(s) until the next check 2025-06-02 20:15:53.284860 | orchestrator | 2025-06-02 20:15:53 | INFO  | Task edda95ee-0720-4434-8b2c-a717dac3476c is in state STARTED 2025-06-02 20:15:53.287625 | orchestrator | 2025-06-02 20:15:53 | INFO  | Task ce577db7-04a8-45de-ba4c-ffd16e1d3bc7 is in state STARTED 2025-06-02 20:15:53.289336 | orchestrator | 2025-06-02 20:15:53 | INFO  | Task 31959657-f934-486f-ae97-741c6a3dbb5e is in state STARTED 2025-06-02 20:15:53.291369 | orchestrator | 2025-06-02 20:15:53 | INFO  | Task 0ec5d894-04ad-4963-adc9-e07e3edb11e6 is in state STARTED 2025-06-02 20:15:53.291451 | orchestrator | 2025-06-02 20:15:53 | INFO  | Wait 1 second(s) until the next check 2025-06-02 20:15:56.335026 | orchestrator | 2025-06-02 20:15:56 | INFO  | Task edda95ee-0720-4434-8b2c-a717dac3476c is in state STARTED 2025-06-02 20:15:56.336092 | orchestrator | 2025-06-02 20:15:56 | INFO  | Task ce577db7-04a8-45de-ba4c-ffd16e1d3bc7 is in state STARTED 2025-06-02 20:15:56.337635 | orchestrator | 2025-06-02 20:15:56 | INFO  | Task 31959657-f934-486f-ae97-741c6a3dbb5e is in state STARTED 2025-06-02 20:15:56.339053 | orchestrator | 2025-06-02 20:15:56 | INFO  | Task 0ec5d894-04ad-4963-adc9-e07e3edb11e6 is in state STARTED 2025-06-02 20:15:56.339085 | orchestrator | 2025-06-02 20:15:56 | INFO  | Wait 1 second(s) until the next check 2025-06-02 20:15:59.375418 | orchestrator | 2025-06-02 20:15:59 | INFO  | Task edda95ee-0720-4434-8b2c-a717dac3476c is in state STARTED 2025-06-02 20:15:59.375590 | orchestrator | 2025-06-02 20:15:59 | INFO  | Task ce577db7-04a8-45de-ba4c-ffd16e1d3bc7 is in state STARTED 2025-06-02 20:15:59.376548 | orchestrator | 2025-06-02 20:15:59 | INFO  | Task 
31959657-f934-486f-ae97-741c6a3dbb5e is in state STARTED 2025-06-02 20:15:59.377208 | orchestrator | 2025-06-02 20:15:59 | INFO  | Task 0ec5d894-04ad-4963-adc9-e07e3edb11e6 is in state STARTED 2025-06-02 20:15:59.377283 | orchestrator | 2025-06-02 20:15:59 | INFO  | Wait 1 second(s) until the next check 2025-06-02 20:16:02.428598 | orchestrator | 2025-06-02 20:16:02 | INFO  | Task edda95ee-0720-4434-8b2c-a717dac3476c is in state STARTED 2025-06-02 20:16:02.431916 | orchestrator | 2025-06-02 20:16:02 | INFO  | Task ce577db7-04a8-45de-ba4c-ffd16e1d3bc7 is in state STARTED 2025-06-02 20:16:02.431966 | orchestrator | 2025-06-02 20:16:02 | INFO  | Task 31959657-f934-486f-ae97-741c6a3dbb5e is in state STARTED 2025-06-02 20:16:02.432911 | orchestrator | 2025-06-02 20:16:02 | INFO  | Task 0ec5d894-04ad-4963-adc9-e07e3edb11e6 is in state STARTED 2025-06-02 20:16:02.432930 | orchestrator | 2025-06-02 20:16:02 | INFO  | Wait 1 second(s) until the next check 2025-06-02 20:16:05.481363 | orchestrator | 2025-06-02 20:16:05 | INFO  | Task edda95ee-0720-4434-8b2c-a717dac3476c is in state STARTED 2025-06-02 20:16:05.482262 | orchestrator | 2025-06-02 20:16:05 | INFO  | Task ce577db7-04a8-45de-ba4c-ffd16e1d3bc7 is in state STARTED 2025-06-02 20:16:05.484173 | orchestrator | 2025-06-02 20:16:05 | INFO  | Task 31959657-f934-486f-ae97-741c6a3dbb5e is in state STARTED 2025-06-02 20:16:05.485365 | orchestrator | 2025-06-02 20:16:05 | INFO  | Task 0ec5d894-04ad-4963-adc9-e07e3edb11e6 is in state STARTED 2025-06-02 20:16:05.485399 | orchestrator | 2025-06-02 20:16:05 | INFO  | Wait 1 second(s) until the next check 2025-06-02 20:16:08.525814 | orchestrator | 2025-06-02 20:16:08 | INFO  | Task edda95ee-0720-4434-8b2c-a717dac3476c is in state STARTED 2025-06-02 20:16:08.528276 | orchestrator | 2025-06-02 20:16:08 | INFO  | Task ce577db7-04a8-45de-ba4c-ffd16e1d3bc7 is in state STARTED 2025-06-02 20:16:08.529969 | orchestrator | 2025-06-02 20:16:08 | INFO  | Task 
31959657-f934-486f-ae97-741c6a3dbb5e is in state STARTED 2025-06-02 20:16:08.531520 | orchestrator | 2025-06-02 20:16:08 | INFO  | Task 0ec5d894-04ad-4963-adc9-e07e3edb11e6 is in state STARTED 2025-06-02 20:16:08.531546 | orchestrator | 2025-06-02 20:16:08 | INFO  | Wait 1 second(s) until the next check 2025-06-02 20:16:11.575332 | orchestrator | 2025-06-02 20:16:11 | INFO  | Task edda95ee-0720-4434-8b2c-a717dac3476c is in state STARTED 2025-06-02 20:16:11.577665 | orchestrator | 2025-06-02 20:16:11 | INFO  | Task ce577db7-04a8-45de-ba4c-ffd16e1d3bc7 is in state STARTED 2025-06-02 20:16:11.578822 | orchestrator | 2025-06-02 20:16:11 | INFO  | Task 31959657-f934-486f-ae97-741c6a3dbb5e is in state STARTED 2025-06-02 20:16:11.580275 | orchestrator | 2025-06-02 20:16:11 | INFO  | Task 0ec5d894-04ad-4963-adc9-e07e3edb11e6 is in state STARTED 2025-06-02 20:16:11.580360 | orchestrator | 2025-06-02 20:16:11 | INFO  | Wait 1 second(s) until the next check
[identical polling cycles for the same four task IDs, repeated roughly every 3 seconds from 20:16:14 through 20:17:00, elided]
2025-06-02 20:17:03.414530 | orchestrator | 2025-06-02 20:17:03 | INFO  | Task edda95ee-0720-4434-8b2c-a717dac3476c is in state SUCCESS 2025-06-02 20:17:03.415868 | orchestrator | 2025-06-02 20:17:03.415945 | orchestrator | 2025-06-02 20:17:03.415956 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-06-02 20:17:03.415963 | 
orchestrator | 2025-06-02 20:17:03.415969 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-06-02 20:17:03.415976 | orchestrator | Monday 02 June 2025 20:14:16 +0000 (0:00:00.244) 0:00:00.244 *********** 2025-06-02 20:17:03.415981 | orchestrator | ok: [testbed-node-0] 2025-06-02 20:17:03.415989 | orchestrator | ok: [testbed-node-1] 2025-06-02 20:17:03.415995 | orchestrator | ok: [testbed-node-2] 2025-06-02 20:17:03.416000 | orchestrator | 2025-06-02 20:17:03.416006 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-06-02 20:17:03.416012 | orchestrator | Monday 02 June 2025 20:14:16 +0000 (0:00:00.287) 0:00:00.531 *********** 2025-06-02 20:17:03.416017 | orchestrator | ok: [testbed-node-0] => (item=enable_glance_True) 2025-06-02 20:17:03.416023 | orchestrator | ok: [testbed-node-1] => (item=enable_glance_True) 2025-06-02 20:17:03.416029 | orchestrator | ok: [testbed-node-2] => (item=enable_glance_True) 2025-06-02 20:17:03.416034 | orchestrator | 2025-06-02 20:17:03.416039 | orchestrator | PLAY [Apply role glance] ******************************************************* 2025-06-02 20:17:03.416045 | orchestrator | 2025-06-02 20:17:03.416050 | orchestrator | TASK [glance : include_tasks] ************************************************** 2025-06-02 20:17:03.416056 | orchestrator | Monday 02 June 2025 20:14:16 +0000 (0:00:00.430) 0:00:00.962 *********** 2025-06-02 20:17:03.416061 | orchestrator | included: /ansible/roles/glance/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 20:17:03.416086 | orchestrator | 2025-06-02 20:17:03.416092 | orchestrator | TASK [service-ks-register : glance | Creating services] ************************ 2025-06-02 20:17:03.416097 | orchestrator | Monday 02 June 2025 20:14:17 +0000 (0:00:00.603) 0:00:01.565 *********** 2025-06-02 20:17:03.416103 | orchestrator | changed: [testbed-node-0] => (item=glance (image)) 
2025-06-02 20:17:03.416108 | orchestrator | 2025-06-02 20:17:03.416114 | orchestrator | TASK [service-ks-register : glance | Creating endpoints] *********************** 2025-06-02 20:17:03.416119 | orchestrator | Monday 02 June 2025 20:14:21 +0000 (0:00:03.652) 0:00:05.217 *********** 2025-06-02 20:17:03.416124 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api-int.testbed.osism.xyz:9292 -> internal) 2025-06-02 20:17:03.416130 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api.testbed.osism.xyz:9292 -> public) 2025-06-02 20:17:03.416136 | orchestrator | 2025-06-02 20:17:03.416141 | orchestrator | TASK [service-ks-register : glance | Creating projects] ************************ 2025-06-02 20:17:03.416147 | orchestrator | Monday 02 June 2025 20:14:27 +0000 (0:00:06.520) 0:00:11.737 *********** 2025-06-02 20:17:03.416152 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-06-02 20:17:03.416158 | orchestrator | 2025-06-02 20:17:03.416164 | orchestrator | TASK [service-ks-register : glance | Creating users] *************************** 2025-06-02 20:17:03.416169 | orchestrator | Monday 02 June 2025 20:14:31 +0000 (0:00:03.284) 0:00:15.022 *********** 2025-06-02 20:17:03.416175 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-06-02 20:17:03.416180 | orchestrator | changed: [testbed-node-0] => (item=glance -> service) 2025-06-02 20:17:03.416186 | orchestrator | 2025-06-02 20:17:03.416191 | orchestrator | TASK [service-ks-register : glance | Creating roles] *************************** 2025-06-02 20:17:03.416197 | orchestrator | Monday 02 June 2025 20:14:34 +0000 (0:00:03.851) 0:00:18.873 *********** 2025-06-02 20:17:03.416202 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-06-02 20:17:03.416208 | orchestrator | 2025-06-02 20:17:03.416213 | orchestrator | TASK [service-ks-register : glance | Granting user roles] ********************** 2025-06-02 20:17:03.416219 | orchestrator | 
Monday 02 June 2025 20:14:38 +0000 (0:00:03.410) 0:00:22.283 *********** 2025-06-02 20:17:03.416224 | orchestrator | changed: [testbed-node-0] => (item=glance -> service -> admin) 2025-06-02 20:17:03.416230 | orchestrator | 2025-06-02 20:17:03.416235 | orchestrator | TASK [glance : Ensuring config directories exist] ****************************** 2025-06-02 20:17:03.416241 | orchestrator | Monday 02 June 2025 20:14:42 +0000 (0:00:04.194) 0:00:26.478 *********** 2025-06-02 20:17:03.416274 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server 
testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-06-02 20:17:03.416289 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-06-02 
20:17:03.416299 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-06-02 20:17:03.416306 | orchestrator | 2025-06-02 20:17:03.416311 | orchestrator | TASK [glance : include_tasks] ************************************************** 2025-06-02 20:17:03.416317 | orchestrator | Monday 02 June 2025 20:14:47 +0000 
(0:00:04.596) 0:00:31.075 *********** 2025-06-02 20:17:03.416326 | orchestrator | included: /ansible/roles/glance/tasks/external_ceph.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 20:17:03.416337 | orchestrator | 2025-06-02 20:17:03.416342 | orchestrator | TASK [glance : Ensuring glance service ceph config subdir exists] ************** 2025-06-02 20:17:03.416348 | orchestrator | Monday 02 June 2025 20:14:47 +0000 (0:00:00.686) 0:00:31.761 *********** 2025-06-02 20:17:03.416353 | orchestrator | changed: [testbed-node-0] 2025-06-02 20:17:03.416359 | orchestrator | changed: [testbed-node-1] 2025-06-02 20:17:03.416364 | orchestrator | changed: [testbed-node-2] 2025-06-02 20:17:03.416369 | orchestrator | 2025-06-02 20:17:03.416375 | orchestrator | TASK [glance : Copy over multiple ceph configs for Glance] ********************* 2025-06-02 20:17:03.416400 | orchestrator | Monday 02 June 2025 20:14:53 +0000 (0:00:06.067) 0:00:37.829 *********** 2025-06-02 20:17:03.416406 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-06-02 20:17:03.416411 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-06-02 20:17:03.416417 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-06-02 20:17:03.416422 | orchestrator | 2025-06-02 20:17:03.416428 | orchestrator | TASK [glance : Copy over ceph Glance keyrings] ********************************* 2025-06-02 20:17:03.416433 | orchestrator | Monday 02 June 2025 20:14:55 +0000 (0:00:01.469) 0:00:39.298 *********** 2025-06-02 20:17:03.416438 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-06-02 20:17:03.416444 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 
2025-06-02 20:17:03.416449 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-06-02 20:17:03.416455 | orchestrator | 2025-06-02 20:17:03.416461 | orchestrator | TASK [glance : Ensuring config directory has correct owner and permission] ***** 2025-06-02 20:17:03.416468 | orchestrator | Monday 02 June 2025 20:14:56 +0000 (0:00:01.142) 0:00:40.440 *********** 2025-06-02 20:17:03.416474 | orchestrator | ok: [testbed-node-0] 2025-06-02 20:17:03.416480 | orchestrator | ok: [testbed-node-2] 2025-06-02 20:17:03.416487 | orchestrator | ok: [testbed-node-1] 2025-06-02 20:17:03.416493 | orchestrator | 2025-06-02 20:17:03.416499 | orchestrator | TASK [glance : Check if policies shall be overwritten] ************************* 2025-06-02 20:17:03.416506 | orchestrator | Monday 02 June 2025 20:14:57 +0000 (0:00:00.925) 0:00:41.366 *********** 2025-06-02 20:17:03.416512 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:17:03.416519 | orchestrator | 2025-06-02 20:17:03.416525 | orchestrator | TASK [glance : Set glance policy file] ***************************************** 2025-06-02 20:17:03.416531 | orchestrator | Monday 02 June 2025 20:14:57 +0000 (0:00:00.165) 0:00:41.531 *********** 2025-06-02 20:17:03.416537 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:17:03.416544 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:17:03.416550 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:17:03.416556 | orchestrator | 2025-06-02 20:17:03.416562 | orchestrator | TASK [glance : include_tasks] ************************************************** 2025-06-02 20:17:03.416569 | orchestrator | Monday 02 June 2025 20:14:57 +0000 (0:00:00.306) 0:00:41.837 *********** 2025-06-02 20:17:03.416575 | orchestrator | included: /ansible/roles/glance/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 20:17:03.416583 | orchestrator | 2025-06-02 20:17:03.416589 | 
orchestrator | TASK [service-cert-copy : glance | Copying over extra CA certificates] ********* 2025-06-02 20:17:03.416596 | orchestrator | Monday 02 June 2025 20:14:58 +0000 (0:00:00.579) 0:00:42.416 *********** 2025-06-02 20:17:03.416609 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-06-02 20:17:03.416622 | 
orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-06-02 20:17:03.416633 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/glance-api:29.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-06-02 20:17:03.416644 | orchestrator | 2025-06-02 20:17:03.416652 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS certificate] *** 2025-06-02 20:17:03.416658 | orchestrator | Monday 02 June 2025 20:15:02 +0000 (0:00:03.711) 0:00:46.128 *********** 2025-06-02 20:17:03.416670 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 
'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-06-02 20:17:03.416678 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:17:03.416685 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 
'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-06-02 20:17:03.416698 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:17:03.416714 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-06-02 20:17:03.416721 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:17:03.416727 | orchestrator | 2025-06-02 20:17:03.416734 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS key] ****** 2025-06-02 20:17:03.416740 | orchestrator | Monday 02 June 2025 20:15:05 +0000 (0:00:03.535) 0:00:49.664 *********** 2025-06-02 20:17:03.416747 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 
'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-06-02 20:17:03.416758 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:17:03.416771 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-06-02 20:17:03.416778 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:17:03.416785 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', 
'/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-06-02 20:17:03.416792 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:17:03.416799 | orchestrator | 2025-06-02 20:17:03.416805 | orchestrator | TASK [glance : Creating TLS backend PEM File] ********************************** 2025-06-02 20:17:03.416816 | orchestrator | Monday 02 June 2025 20:15:10 +0000 (0:00:04.397) 0:00:54.061 *********** 2025-06-02 20:17:03.416822 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:17:03.416828 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:17:03.416834 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:17:03.416841 | orchestrator | 2025-06-02 20:17:03.416847 | orchestrator | TASK [glance : Copying over config.json files for services] ******************** 2025-06-02 20:17:03.416853 | orchestrator | Monday 02 June 2025 20:15:15 +0000 (0:00:05.490) 0:00:59.551 *********** 2025-06-02 20:17:03.416872 | 
orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-06-02 20:17:03.416879 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/glance-api:29.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-06-02 20:17:03.416892 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': 
['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-06-02 20:17:03.416898 | orchestrator | 2025-06-02 20:17:03.416903 | orchestrator | TASK [glance : Copying over glance-api.conf] *********************************** 2025-06-02 20:17:03.416909 | orchestrator | Monday 02 June 2025 20:15:19 +0000 (0:00:04.107) 0:01:03.659 *********** 2025-06-02 20:17:03.416914 | orchestrator | changed: [testbed-node-1] 2025-06-02 20:17:03.416920 | orchestrator | changed: [testbed-node-0] 2025-06-02 20:17:03.416925 | orchestrator | changed: [testbed-node-2] 2025-06-02 20:17:03.416930 | orchestrator | 2025-06-02 20:17:03.416936 | orchestrator | TASK [glance : Copying over glance-cache.conf for glance_api] ****************** 
2025-06-02 20:17:03.417055 | orchestrator | Monday 02 June 2025 20:15:24 +0000 (0:00:05.029) 0:01:08.688 *********** 2025-06-02 20:17:03.417064 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:17:03.417069 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:17:03.417075 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:17:03.417080 | orchestrator | 2025-06-02 20:17:03.417085 | orchestrator | TASK [glance : Copying over glance-swift.conf for glance_api] ****************** 2025-06-02 20:17:03.417091 | orchestrator | Monday 02 June 2025 20:15:30 +0000 (0:00:05.437) 0:01:14.126 *********** 2025-06-02 20:17:03.417096 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:17:03.417102 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:17:03.417107 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:17:03.417138 | orchestrator | 2025-06-02 20:17:03.417144 | orchestrator | TASK [glance : Copying over glance-image-import.conf] ************************** 2025-06-02 20:17:03.417150 | orchestrator | Monday 02 June 2025 20:15:34 +0000 (0:00:04.195) 0:01:18.322 *********** 2025-06-02 20:17:03.417155 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:17:03.417161 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:17:03.417166 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:17:03.417172 | orchestrator | 2025-06-02 20:17:03.417177 | orchestrator | TASK [glance : Copying over property-protections-rules.conf] ******************* 2025-06-02 20:17:03.417182 | orchestrator | Monday 02 June 2025 20:15:38 +0000 (0:00:04.020) 0:01:22.343 *********** 2025-06-02 20:17:03.417188 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:17:03.417193 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:17:03.417198 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:17:03.417204 | orchestrator | 2025-06-02 20:17:03.417214 | orchestrator | TASK [glance : Copying over existing policy file] ****************************** 
2025-06-02 20:17:03.417220 | orchestrator | Monday 02 June 2025 20:15:44 +0000 (0:00:05.681) 0:01:28.024 *********** 2025-06-02 20:17:03.417225 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:17:03.417230 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:17:03.417236 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:17:03.417241 | orchestrator | 2025-06-02 20:17:03.417246 | orchestrator | TASK [glance : Copying over glance-haproxy-tls.cfg] **************************** 2025-06-02 20:17:03.417252 | orchestrator | Monday 02 June 2025 20:15:44 +0000 (0:00:00.534) 0:01:28.559 *********** 2025-06-02 20:17:03.417257 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2025-06-02 20:17:03.417262 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:17:03.417268 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2025-06-02 20:17:03.417273 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:17:03.417279 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2025-06-02 20:17:03.417284 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:17:03.417289 | orchestrator | 2025-06-02 20:17:03.417295 | orchestrator | TASK [glance : Check glance containers] **************************************** 2025-06-02 20:17:03.417300 | orchestrator | Monday 02 June 2025 20:15:47 +0000 (0:00:03.431) 0:01:31.990 *********** 2025-06-02 20:17:03.417311 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': 
['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-06-02 20:17:03.417324 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-06-02 20:17:03.417335 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-06-02 20:17:03.417341 | orchestrator | 2025-06-02 20:17:03.417347 | orchestrator | TASK [glance : include_tasks] ************************************************** 2025-06-02 20:17:03.417352 | orchestrator | Monday 02 June 2025 20:15:51 +0000 (0:00:03.402) 0:01:35.392 *********** 2025-06-02 20:17:03.417358 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:17:03.417363 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:17:03.417369 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:17:03.417374 | orchestrator | 2025-06-02 20:17:03.417423 | orchestrator | TASK [glance : Creating Glance database] *************************************** 2025-06-02 20:17:03.417433 | orchestrator | Monday 02 June 2025 20:15:51 +0000 (0:00:00.245) 0:01:35.638 *********** 2025-06-02 20:17:03.417439 | orchestrator | changed: [testbed-node-0] 2025-06-02 20:17:03.417444 | orchestrator | 2025-06-02 20:17:03.417450 | orchestrator | TASK [glance : Creating Glance database user and setting permissions] ********** 2025-06-02 20:17:03.417455 | orchestrator | Monday 02 June 2025 20:15:53 +0000 (0:00:02.104) 0:01:37.743 
*********** 2025-06-02 20:17:03.417461 | orchestrator | changed: [testbed-node-0] 2025-06-02 20:17:03.417466 | orchestrator | 2025-06-02 20:17:03.417472 | orchestrator | TASK [glance : Enable log_bin_trust_function_creators function] **************** 2025-06-02 20:17:03.417477 | orchestrator | Monday 02 June 2025 20:15:56 +0000 (0:00:02.259) 0:01:40.002 *********** 2025-06-02 20:17:03.417483 | orchestrator | changed: [testbed-node-0] 2025-06-02 20:17:03.417496 | orchestrator | 2025-06-02 20:17:03.417502 | orchestrator | TASK [glance : Running Glance bootstrap container] ***************************** 2025-06-02 20:17:03.417510 | orchestrator | Monday 02 June 2025 20:15:58 +0000 (0:00:02.288) 0:01:42.291 *********** 2025-06-02 20:17:03.417516 | orchestrator | changed: [testbed-node-0] 2025-06-02 20:17:03.417521 | orchestrator | 2025-06-02 20:17:03.417527 | orchestrator | TASK [glance : Disable log_bin_trust_function_creators function] *************** 2025-06-02 20:17:03.417532 | orchestrator | Monday 02 June 2025 20:16:27 +0000 (0:00:29.027) 0:02:11.318 *********** 2025-06-02 20:17:03.417538 | orchestrator | changed: [testbed-node-0] 2025-06-02 20:17:03.417543 | orchestrator | 2025-06-02 20:17:03.417549 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2025-06-02 20:17:03.417554 | orchestrator | Monday 02 June 2025 20:16:29 +0000 (0:00:02.457) 0:02:13.776 *********** 2025-06-02 20:17:03.417560 | orchestrator | 2025-06-02 20:17:03.417565 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2025-06-02 20:17:03.417570 | orchestrator | Monday 02 June 2025 20:16:29 +0000 (0:00:00.062) 0:02:13.838 *********** 2025-06-02 20:17:03.417576 | orchestrator | 2025-06-02 20:17:03.417581 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2025-06-02 20:17:03.417587 | orchestrator | Monday 02 June 2025 20:16:29 +0000 (0:00:00.062) 0:02:13.901 
*********** 2025-06-02 20:17:03.417592 | orchestrator | 2025-06-02 20:17:03.417597 | orchestrator | RUNNING HANDLER [glance : Restart glance-api container] ************************ 2025-06-02 20:17:03.417603 | orchestrator | Monday 02 June 2025 20:16:29 +0000 (0:00:00.062) 0:02:13.963 *********** 2025-06-02 20:17:03.417608 | orchestrator | changed: [testbed-node-0] 2025-06-02 20:17:03.417614 | orchestrator | changed: [testbed-node-1] 2025-06-02 20:17:03.417619 | orchestrator | changed: [testbed-node-2] 2025-06-02 20:17:03.417625 | orchestrator | 2025-06-02 20:17:03.417630 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-02 20:17:03.417637 | orchestrator | testbed-node-0 : ok=26  changed=18  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2025-06-02 20:17:03.417645 | orchestrator | testbed-node-1 : ok=15  changed=9  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2025-06-02 20:17:03.417651 | orchestrator | testbed-node-2 : ok=15  changed=9  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2025-06-02 20:17:03.417657 | orchestrator | 2025-06-02 20:17:03.417663 | orchestrator | 2025-06-02 20:17:03.417668 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-02 20:17:03.417674 | orchestrator | Monday 02 June 2025 20:17:00 +0000 (0:00:30.518) 0:02:44.482 *********** 2025-06-02 20:17:03.417679 | orchestrator | =============================================================================== 2025-06-02 20:17:03.417685 | orchestrator | glance : Restart glance-api container ---------------------------------- 30.52s 2025-06-02 20:17:03.417692 | orchestrator | glance : Running Glance bootstrap container ---------------------------- 29.03s 2025-06-02 20:17:03.417698 | orchestrator | service-ks-register : glance | Creating endpoints ----------------------- 6.52s 2025-06-02 20:17:03.417706 | orchestrator | glance : Ensuring glance service ceph config 
subdir exists -------------- 6.07s 2025-06-02 20:17:03.417712 | orchestrator | glance : Copying over property-protections-rules.conf ------------------- 5.68s 2025-06-02 20:17:03.417718 | orchestrator | glance : Creating TLS backend PEM File ---------------------------------- 5.49s 2025-06-02 20:17:03.417724 | orchestrator | glance : Copying over glance-cache.conf for glance_api ------------------ 5.44s 2025-06-02 20:17:03.417730 | orchestrator | glance : Copying over glance-api.conf ----------------------------------- 5.03s 2025-06-02 20:17:03.417736 | orchestrator | glance : Ensuring config directories exist ------------------------------ 4.60s 2025-06-02 20:17:03.417743 | orchestrator | service-cert-copy : glance | Copying over backend internal TLS key ------ 4.40s 2025-06-02 20:17:03.417749 | orchestrator | glance : Copying over glance-swift.conf for glance_api ------------------ 4.20s 2025-06-02 20:17:03.417760 | orchestrator | service-ks-register : glance | Granting user roles ---------------------- 4.19s 2025-06-02 20:17:03.417766 | orchestrator | glance : Copying over config.json files for services -------------------- 4.11s 2025-06-02 20:17:03.417773 | orchestrator | glance : Copying over glance-image-import.conf -------------------------- 4.02s 2025-06-02 20:17:03.417779 | orchestrator | service-ks-register : glance | Creating users --------------------------- 3.85s 2025-06-02 20:17:03.417786 | orchestrator | service-cert-copy : glance | Copying over extra CA certificates --------- 3.71s 2025-06-02 20:17:03.417792 | orchestrator | service-ks-register : glance | Creating services ------------------------ 3.65s 2025-06-02 20:17:03.417798 | orchestrator | service-cert-copy : glance | Copying over backend internal TLS certificate --- 3.54s 2025-06-02 20:17:03.417804 | orchestrator | glance : Copying over glance-haproxy-tls.cfg ---------------------------- 3.43s 2025-06-02 20:17:03.417809 | orchestrator | service-ks-register : glance | Creating roles 
--------------------------- 3.41s 2025-06-02 20:17:03.417814 | orchestrator | 2025-06-02 20:17:03 | INFO  | Task ce577db7-04a8-45de-ba4c-ffd16e1d3bc7 is in state STARTED 2025-06-02 20:17:03.417820 | orchestrator | 2025-06-02 20:17:03 | INFO  | Task b2fb0cd7-d6ce-48cc-a92d-890fe5ccb1a0 is in state STARTED 2025-06-02 20:17:03.417869 | orchestrator | 2025-06-02 20:17:03 | INFO  | Task 31959657-f934-486f-ae97-741c6a3dbb5e is in state STARTED 2025-06-02 20:17:03.418418 | orchestrator | 2025-06-02 20:17:03 | INFO  | Task 0ec5d894-04ad-4963-adc9-e07e3edb11e6 is in state STARTED 2025-06-02 20:17:03.418450 | orchestrator | 2025-06-02 20:17:03 | INFO  | Wait 1 second(s) until the next check 2025-06-02 20:17:06.449354 | orchestrator | 2025-06-02 20:17:06 | INFO  | Task ce577db7-04a8-45de-ba4c-ffd16e1d3bc7 is in state STARTED 2025-06-02 20:17:06.450973 | orchestrator | 2025-06-02 20:17:06 | INFO  | Task b2fb0cd7-d6ce-48cc-a92d-890fe5ccb1a0 is in state STARTED 2025-06-02 20:17:06.452502 | orchestrator | 2025-06-02 20:17:06 | INFO  | Task 31959657-f934-486f-ae97-741c6a3dbb5e is in state STARTED 2025-06-02 20:17:06.454160 | orchestrator | 2025-06-02 20:17:06 | INFO  | Task 0ec5d894-04ad-4963-adc9-e07e3edb11e6 is in state STARTED 2025-06-02 20:17:06.454232 | orchestrator | 2025-06-02 20:17:06 | INFO  | Wait 1 second(s) until the next check 2025-06-02 20:17:09.495035 | orchestrator | 2025-06-02 20:17:09 | INFO  | Task ce577db7-04a8-45de-ba4c-ffd16e1d3bc7 is in state STARTED 2025-06-02 20:17:09.497105 | orchestrator | 2025-06-02 20:17:09 | INFO  | Task b2fb0cd7-d6ce-48cc-a92d-890fe5ccb1a0 is in state STARTED 2025-06-02 20:17:09.498931 | orchestrator | 2025-06-02 20:17:09 | INFO  | Task 31959657-f934-486f-ae97-741c6a3dbb5e is in state STARTED 2025-06-02 20:17:09.500267 | orchestrator | 2025-06-02 20:17:09 | INFO  | Task 0ec5d894-04ad-4963-adc9-e07e3edb11e6 is in state STARTED 2025-06-02 20:17:09.500460 | orchestrator | 2025-06-02 20:17:09 | INFO  | Wait 1 second(s) until the next 
check 2025-06-02 20:17:12.541782 | orchestrator | 2025-06-02 20:17:12 | INFO  | Task ce577db7-04a8-45de-ba4c-ffd16e1d3bc7 is in state STARTED 2025-06-02 20:17:12.543759 | orchestrator | 2025-06-02 20:17:12 | INFO  | Task b2fb0cd7-d6ce-48cc-a92d-890fe5ccb1a0 is in state STARTED 2025-06-02 20:17:12.545648 | orchestrator | 2025-06-02 20:17:12 | INFO  | Task 31959657-f934-486f-ae97-741c6a3dbb5e is in state STARTED 2025-06-02 20:17:12.547129 | orchestrator | 2025-06-02 20:17:12 | INFO  | Task 0ec5d894-04ad-4963-adc9-e07e3edb11e6 is in state STARTED 2025-06-02 20:17:12.547158 | orchestrator | 2025-06-02 20:17:12 | INFO  | Wait 1 second(s) until the next check 2025-06-02 20:17:15.578904 | orchestrator | 2025-06-02 20:17:15 | INFO  | Task ce577db7-04a8-45de-ba4c-ffd16e1d3bc7 is in state STARTED 2025-06-02 20:17:15.580351 | orchestrator | 2025-06-02 20:17:15 | INFO  | Task b2fb0cd7-d6ce-48cc-a92d-890fe5ccb1a0 is in state STARTED 2025-06-02 20:17:15.582209 | orchestrator | 2025-06-02 20:17:15 | INFO  | Task 31959657-f934-486f-ae97-741c6a3dbb5e is in state STARTED 2025-06-02 20:17:15.583945 | orchestrator | 2025-06-02 20:17:15 | INFO  | Task 0ec5d894-04ad-4963-adc9-e07e3edb11e6 is in state STARTED 2025-06-02 20:17:15.583978 | orchestrator | 2025-06-02 20:17:15 | INFO  | Wait 1 second(s) until the next check 2025-06-02 20:17:18.626140 | orchestrator | 2025-06-02 20:17:18 | INFO  | Task ce577db7-04a8-45de-ba4c-ffd16e1d3bc7 is in state STARTED 2025-06-02 20:17:18.627928 | orchestrator | 2025-06-02 20:17:18 | INFO  | Task b2fb0cd7-d6ce-48cc-a92d-890fe5ccb1a0 is in state STARTED 2025-06-02 20:17:18.629560 | orchestrator | 2025-06-02 20:17:18 | INFO  | Task 31959657-f934-486f-ae97-741c6a3dbb5e is in state STARTED 2025-06-02 20:17:18.631263 | orchestrator | 2025-06-02 20:17:18 | INFO  | Task 0ec5d894-04ad-4963-adc9-e07e3edb11e6 is in state STARTED 2025-06-02 20:17:18.631352 | orchestrator | 2025-06-02 20:17:18 | INFO  | Wait 1 second(s) until the next check 2025-06-02 
20:17:21.680278 | orchestrator | 2025-06-02 20:17:21 | INFO  | Task ce577db7-04a8-45de-ba4c-ffd16e1d3bc7 is in state STARTED
2025-06-02 20:17:21.681590 | orchestrator | 2025-06-02 20:17:21 | INFO  | Task b2fb0cd7-d6ce-48cc-a92d-890fe5ccb1a0 is in state STARTED
2025-06-02 20:17:21.683921 | orchestrator | 2025-06-02 20:17:21 | INFO  | Task 31959657-f934-486f-ae97-741c6a3dbb5e is in state STARTED
2025-06-02 20:17:21.686765 | orchestrator | 2025-06-02 20:17:21 | INFO  | Task 0ec5d894-04ad-4963-adc9-e07e3edb11e6 is in state STARTED
2025-06-02 20:17:21.686820 | orchestrator | 2025-06-02 20:17:21 | INFO  | Wait 1 second(s) until the next check
2025-06-02 20:17:24.727109 | orchestrator | 2025-06-02 20:17:24 | INFO  | Task ce577db7-04a8-45de-ba4c-ffd16e1d3bc7 is in state STARTED
2025-06-02 20:17:24.728315 | orchestrator | 2025-06-02 20:17:24 | INFO  | Task b2fb0cd7-d6ce-48cc-a92d-890fe5ccb1a0 is in state STARTED
2025-06-02 20:17:24.730052 | orchestrator | 2025-06-02 20:17:24 | INFO  | Task 31959657-f934-486f-ae97-741c6a3dbb5e is in state STARTED
2025-06-02 20:17:24.731206 | orchestrator | 2025-06-02 20:17:24 | INFO  | Task 0ec5d894-04ad-4963-adc9-e07e3edb11e6 is in state STARTED
2025-06-02 20:17:24.731233 | orchestrator | 2025-06-02 20:17:24 | INFO  | Wait 1 second(s) until the next check
2025-06-02 20:17:27.771141 | orchestrator | 2025-06-02 20:17:27 | INFO  | Task ce577db7-04a8-45de-ba4c-ffd16e1d3bc7 is in state STARTED
2025-06-02 20:17:27.771744 | orchestrator | 2025-06-02 20:17:27 | INFO  | Task b2fb0cd7-d6ce-48cc-a92d-890fe5ccb1a0 is in state STARTED
2025-06-02 20:17:27.772417 | orchestrator | 2025-06-02 20:17:27 | INFO  | Task 31959657-f934-486f-ae97-741c6a3dbb5e is in state STARTED
2025-06-02 20:17:27.773286 | orchestrator | 2025-06-02 20:17:27 | INFO  | Task 0ec5d894-04ad-4963-adc9-e07e3edb11e6 is in state STARTED
2025-06-02 20:17:27.773333 | orchestrator | 2025-06-02 20:17:27 | INFO  | Wait 1 second(s) until the next check
2025-06-02 20:17:30.818969 | orchestrator | 2025-06-02 20:17:30 | INFO  | Task ce577db7-04a8-45de-ba4c-ffd16e1d3bc7 is in state STARTED
2025-06-02 20:17:30.821130 | orchestrator | 2025-06-02 20:17:30 | INFO  | Task b2fb0cd7-d6ce-48cc-a92d-890fe5ccb1a0 is in state STARTED
2025-06-02 20:17:30.822743 | orchestrator | 2025-06-02 20:17:30 | INFO  | Task 31959657-f934-486f-ae97-741c6a3dbb5e is in state STARTED
2025-06-02 20:17:30.823919 | orchestrator | 2025-06-02 20:17:30 | INFO  | Task 0ec5d894-04ad-4963-adc9-e07e3edb11e6 is in state STARTED
2025-06-02 20:17:30.823957 | orchestrator | 2025-06-02 20:17:30 | INFO  | Wait 1 second(s) until the next check
2025-06-02 20:17:33.869184 | orchestrator | 2025-06-02 20:17:33 | INFO  | Task ce577db7-04a8-45de-ba4c-ffd16e1d3bc7 is in state STARTED
2025-06-02 20:17:33.873580 | orchestrator | 2025-06-02 20:17:33 | INFO  | Task b2fb0cd7-d6ce-48cc-a92d-890fe5ccb1a0 is in state STARTED
2025-06-02 20:17:33.875306 | orchestrator | 2025-06-02 20:17:33 | INFO  | Task 31959657-f934-486f-ae97-741c6a3dbb5e is in state STARTED
2025-06-02 20:17:33.876995 | orchestrator | 2025-06-02 20:17:33 | INFO  | Task 0ec5d894-04ad-4963-adc9-e07e3edb11e6 is in state STARTED
2025-06-02 20:17:33.877074 | orchestrator | 2025-06-02 20:17:33 | INFO  | Wait 1 second(s) until the next check
2025-06-02 20:17:36.920526 | orchestrator | 2025-06-02 20:17:36 | INFO  | Task ce577db7-04a8-45de-ba4c-ffd16e1d3bc7 is in state STARTED
2025-06-02 20:17:36.920622 | orchestrator | 2025-06-02 20:17:36 | INFO  | Task b2fb0cd7-d6ce-48cc-a92d-890fe5ccb1a0 is in state STARTED
2025-06-02 20:17:36.925332 | orchestrator | 2025-06-02 20:17:36 | INFO  | Task 31959657-f934-486f-ae97-741c6a3dbb5e is in state STARTED
2025-06-02 20:17:36.926003 | orchestrator | 2025-06-02 20:17:36 | INFO  | Task 0ec5d894-04ad-4963-adc9-e07e3edb11e6 is in state STARTED
2025-06-02 20:17:36.926197 | orchestrator | 2025-06-02 20:17:36 | INFO  | Wait 1 second(s) until the next check
2025-06-02 20:17:39.970931 | orchestrator | 2025-06-02 20:17:39 | INFO  | Task ce577db7-04a8-45de-ba4c-ffd16e1d3bc7 is in state STARTED
2025-06-02 20:17:39.971688 | orchestrator | 2025-06-02 20:17:39 | INFO  | Task b2fb0cd7-d6ce-48cc-a92d-890fe5ccb1a0 is in state STARTED
2025-06-02 20:17:39.975103 | orchestrator | 2025-06-02 20:17:39 | INFO  | Task 31959657-f934-486f-ae97-741c6a3dbb5e is in state STARTED
2025-06-02 20:17:39.976851 | orchestrator | 2025-06-02 20:17:39 | INFO  | Task 0ec5d894-04ad-4963-adc9-e07e3edb11e6 is in state STARTED
2025-06-02 20:17:39.977275 | orchestrator | 2025-06-02 20:17:39 | INFO  | Wait 1 second(s) until the next check
2025-06-02 20:17:43.023920 | orchestrator | 2025-06-02 20:17:43 | INFO  | Task ce577db7-04a8-45de-ba4c-ffd16e1d3bc7 is in state STARTED
2025-06-02 20:17:43.025089 | orchestrator | 2025-06-02 20:17:43 | INFO  | Task b2fb0cd7-d6ce-48cc-a92d-890fe5ccb1a0 is in state STARTED
2025-06-02 20:17:43.029021 | orchestrator | 2025-06-02 20:17:43 | INFO  | Task 31959657-f934-486f-ae97-741c6a3dbb5e is in state STARTED
2025-06-02 20:17:43.029884 | orchestrator | 2025-06-02 20:17:43 | INFO  | Task 161833d5-2ecf-4bc2-9e8e-dda197c4bd35 is in state STARTED
2025-06-02 20:17:43.032689 | orchestrator | 2025-06-02 20:17:43 | INFO  | Task 0ec5d894-04ad-4963-adc9-e07e3edb11e6 is in state SUCCESS
2025-06-02 20:17:43.033983 | orchestrator |
2025-06-02 20:17:43.034049 | orchestrator |
2025-06-02 20:17:43.034059 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-06-02 20:17:43.034067 | orchestrator |
2025-06-02 20:17:43.034074 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-06-02 20:17:43.034097 | orchestrator | Monday 02 June 2025 20:14:32 +0000 (0:00:00.229) 0:00:00.229 ***********
2025-06-02 20:17:43.034105 | orchestrator | ok: [testbed-node-0]
2025-06-02 20:17:43.034165 | orchestrator | ok: [testbed-node-1]
2025-06-02 20:17:43.034173 | orchestrator | ok: [testbed-node-2]
2025-06-02 20:17:43.034181 | orchestrator |
ok: [testbed-node-3]
2025-06-02 20:17:43.034188 | orchestrator | ok: [testbed-node-4]
2025-06-02 20:17:43.034195 | orchestrator | ok: [testbed-node-5]
2025-06-02 20:17:43.034203 | orchestrator |
2025-06-02 20:17:43.034210 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-06-02 20:17:43.034218 | orchestrator | Monday 02 June 2025 20:14:33 +0000 (0:00:00.606) 0:00:00.835 ***********
2025-06-02 20:17:43.034225 | orchestrator | ok: [testbed-node-0] => (item=enable_cinder_True)
2025-06-02 20:17:43.034233 | orchestrator | ok: [testbed-node-1] => (item=enable_cinder_True)
2025-06-02 20:17:43.034307 | orchestrator | ok: [testbed-node-2] => (item=enable_cinder_True)
2025-06-02 20:17:43.034315 | orchestrator | ok: [testbed-node-3] => (item=enable_cinder_True)
2025-06-02 20:17:43.034322 | orchestrator | ok: [testbed-node-4] => (item=enable_cinder_True)
2025-06-02 20:17:43.034331 | orchestrator | ok: [testbed-node-5] => (item=enable_cinder_True)
2025-06-02 20:17:43.034338 | orchestrator |
2025-06-02 20:17:43.034345 | orchestrator | PLAY [Apply role cinder] *******************************************************
2025-06-02 20:17:43.034403 | orchestrator |
2025-06-02 20:17:43.034409 | orchestrator | TASK [cinder : include_tasks] **************************************************
2025-06-02 20:17:43.034416 | orchestrator | Monday 02 June 2025 20:14:33 +0000 (0:00:00.528) 0:00:01.364 ***********
2025-06-02 20:17:43.034422 | orchestrator | included: /ansible/roles/cinder/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-06-02 20:17:43.034431 | orchestrator |
2025-06-02 20:17:43.034437 | orchestrator | TASK [service-ks-register : cinder | Creating services] ************************
2025-06-02 20:17:43.034444 | orchestrator | Monday 02 June 2025 20:14:34 +0000 (0:00:00.986) 0:00:02.350 ***********
2025-06-02 20:17:43.034451 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 (volumev3))
2025-06-02 20:17:43.034457 | orchestrator |
2025-06-02 20:17:43.034464 | orchestrator | TASK [service-ks-register : cinder | Creating endpoints] ***********************
2025-06-02 20:17:43.034471 | orchestrator | Monday 02 June 2025 20:14:37 +0000 (0:00:03.350) 0:00:05.700 ***********
2025-06-02 20:17:43.034478 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api-int.testbed.osism.xyz:8776/v3/%(tenant_id)s -> internal)
2025-06-02 20:17:43.034485 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api.testbed.osism.xyz:8776/v3/%(tenant_id)s -> public)
2025-06-02 20:17:43.034492 | orchestrator |
2025-06-02 20:17:43.034499 | orchestrator | TASK [service-ks-register : cinder | Creating projects] ************************
2025-06-02 20:17:43.034505 | orchestrator | Monday 02 June 2025 20:14:44 +0000 (0:00:07.037) 0:00:12.738 ***********
2025-06-02 20:17:43.034568 | orchestrator | ok: [testbed-node-0] => (item=service)
2025-06-02 20:17:43.034576 | orchestrator |
2025-06-02 20:17:43.034590 | orchestrator | TASK [service-ks-register : cinder | Creating users] ***************************
2025-06-02 20:17:43.034596 | orchestrator | Monday 02 June 2025 20:14:48 +0000 (0:00:03.502) 0:00:16.240 ***********
2025-06-02 20:17:43.034602 | orchestrator | [WARNING]: Module did not set no_log for update_password
2025-06-02 20:17:43.034609 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service)
2025-06-02 20:17:43.034620 | orchestrator |
2025-06-02 20:17:43.034627 | orchestrator | TASK [service-ks-register : cinder | Creating roles] ***************************
2025-06-02 20:17:43.034633 | orchestrator | Monday 02 June 2025 20:14:52 +0000 (0:00:04.063) 0:00:20.304 ***********
2025-06-02 20:17:43.034645 | orchestrator | ok: [testbed-node-0] => (item=admin)
2025-06-02 20:17:43.034669 | orchestrator |
2025-06-02 20:17:43.034681 | orchestrator | TASK [service-ks-register : cinder |
Granting user roles] ********************** 2025-06-02 20:17:43.034693 | orchestrator | Monday 02 June 2025 20:14:55 +0000 (0:00:03.320) 0:00:23.624 *********** 2025-06-02 20:17:43.034701 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> admin) 2025-06-02 20:17:43.034707 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> service) 2025-06-02 20:17:43.034713 | orchestrator | 2025-06-02 20:17:43.034719 | orchestrator | TASK [cinder : Ensuring config directories exist] ****************************** 2025-06-02 20:17:43.034725 | orchestrator | Monday 02 June 2025 20:15:03 +0000 (0:00:07.957) 0:00:31.582 *********** 2025-06-02 20:17:43.034750 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-06-02 20:17:43.034784 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-06-02 20:17:43.034796 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-06-02 20:17:43.034808 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', 
'/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-06-02 20:17:43.034818 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-06-02 20:17:43.034826 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-06-02 20:17:43.034849 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': 
['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-06-02 20:17:43.034857 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-06-02 20:17:43.034866 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 
2025-06-02 20:17:43.034873 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-06-02 20:17:43.034883 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-06-02 20:17:43.034908 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': 
['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-06-02 20:17:43.034921 | orchestrator | 2025-06-02 20:17:43.034933 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2025-06-02 20:17:43.034945 | orchestrator | Monday 02 June 2025 20:15:06 +0000 (0:00:02.808) 0:00:34.390 *********** 2025-06-02 20:17:43.034958 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:17:43.034970 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:17:43.034977 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:17:43.034984 | orchestrator | skipping: [testbed-node-3] 2025-06-02 20:17:43.034991 | orchestrator | skipping: [testbed-node-4] 2025-06-02 20:17:43.034997 | orchestrator | skipping: [testbed-node-5] 2025-06-02 20:17:43.035004 | orchestrator | 2025-06-02 20:17:43.035010 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2025-06-02 20:17:43.035016 | orchestrator | Monday 02 June 2025 20:15:07 +0000 (0:00:00.893) 0:00:35.284 *********** 2025-06-02 20:17:43.035023 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:17:43.035029 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:17:43.035035 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:17:43.035042 | orchestrator | included: /ansible/roles/cinder/tasks/external_ceph.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-02 20:17:43.035048 | orchestrator | 2025-06-02 20:17:43.035054 | orchestrator | TASK [cinder : Ensuring cinder service ceph config subdirs exists] ************* 2025-06-02 
20:17:43.035060 | orchestrator | Monday 02 June 2025 20:15:08 +0000 (0:00:01.498) 0:00:36.782 *********** 2025-06-02 20:17:43.035067 | orchestrator | changed: [testbed-node-3] => (item=cinder-volume) 2025-06-02 20:17:43.035073 | orchestrator | changed: [testbed-node-4] => (item=cinder-volume) 2025-06-02 20:17:43.035079 | orchestrator | changed: [testbed-node-5] => (item=cinder-volume) 2025-06-02 20:17:43.035085 | orchestrator | changed: [testbed-node-3] => (item=cinder-backup) 2025-06-02 20:17:43.035091 | orchestrator | changed: [testbed-node-5] => (item=cinder-backup) 2025-06-02 20:17:43.035097 | orchestrator | changed: [testbed-node-4] => (item=cinder-backup) 2025-06-02 20:17:43.035104 | orchestrator | 2025-06-02 20:17:43.035110 | orchestrator | TASK [cinder : Copying over multiple ceph.conf for cinder services] ************ 2025-06-02 20:17:43.035116 | orchestrator | Monday 02 June 2025 20:15:11 +0000 (0:00:02.409) 0:00:39.191 *********** 2025-06-02 20:17:43.035122 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-06-02 20:17:43.035134 | orchestrator | 
skipping: [testbed-node-3] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-06-02 20:17:43.035145 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-06-02 20:17:43.035155 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': 
['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-06-02 20:17:43.035161 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-06-02 20:17:43.035168 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-06-02 20:17:43.035180 | orchestrator | changed: [testbed-node-3] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-06-02 20:17:43.035202 | orchestrator | changed: [testbed-node-4] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': 
True}]) 2025-06-02 20:17:43.035208 | orchestrator | changed: [testbed-node-5] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-06-02 20:17:43.035215 | orchestrator | changed: [testbed-node-3] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-06-02 20:17:43.035223 | orchestrator | changed: [testbed-node-5] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-06-02 20:17:43.035235 | orchestrator | changed: [testbed-node-4] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-06-02 20:17:43.035241 | orchestrator | 2025-06-02 20:17:43.035248 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-volume] ***************** 2025-06-02 20:17:43.035254 | orchestrator | Monday 02 June 2025 20:15:15 +0000 (0:00:04.588) 0:00:43.779 *********** 2025-06-02 20:17:43.035260 | orchestrator | changed: [testbed-node-3] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2025-06-02 20:17:43.035268 | orchestrator | changed: [testbed-node-4] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2025-06-02 
20:17:43.035274 | orchestrator | changed: [testbed-node-5] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True})
2025-06-02 20:17:43.035280 | orchestrator |
2025-06-02 20:17:43.035287 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-backup] *****************
2025-06-02 20:17:43.035293 | orchestrator | Monday 02 June 2025 20:15:18 +0000 (0:00:02.048) 0:00:45.828 ***********
2025-06-02 20:17:43.035303 | orchestrator | changed: [testbed-node-3] => (item=ceph.client.cinder.keyring)
2025-06-02 20:17:43.035310 | orchestrator | changed: [testbed-node-4] => (item=ceph.client.cinder.keyring)
2025-06-02 20:17:43.035319 | orchestrator | changed: [testbed-node-5] => (item=ceph.client.cinder.keyring)
2025-06-02 20:17:43.035326 | orchestrator | changed: [testbed-node-3] => (item=ceph.client.cinder-backup.keyring)
2025-06-02 20:17:43.035333 | orchestrator | changed: [testbed-node-4] => (item=ceph.client.cinder-backup.keyring)
2025-06-02 20:17:43.035339 | orchestrator | changed: [testbed-node-5] => (item=ceph.client.cinder-backup.keyring)
2025-06-02 20:17:43.035346 | orchestrator |
2025-06-02 20:17:43.035405 | orchestrator | TASK [cinder : Ensuring config directory has correct owner and permission] *****
2025-06-02 20:17:43.035413 | orchestrator | Monday 02 June 2025 20:15:20 +0000 (0:00:02.536) 0:00:48.364 ***********
2025-06-02 20:17:43.035419 | orchestrator | ok: [testbed-node-3] => (item=cinder-volume)
2025-06-02 20:17:43.035426 | orchestrator | ok: [testbed-node-4] => (item=cinder-volume)
2025-06-02 20:17:43.035433 | orchestrator | ok: [testbed-node-5] => (item=cinder-volume)
2025-06-02 20:17:43.035439 | orchestrator | ok: [testbed-node-3] => (item=cinder-backup)
2025-06-02 20:17:43.035446 | orchestrator | ok: [testbed-node-4] => (item=cinder-backup)
2025-06-02 20:17:43.035452 | orchestrator | ok: [testbed-node-5] => (item=cinder-backup)
2025-06-02 20:17:43.035459 | orchestrator |
2025-06-02 20:17:43.035465 | orchestrator | TASK [cinder : Check if policies shall be overwritten] *************************
2025-06-02 20:17:43.035472 | orchestrator | Monday 02 June 2025 20:15:21 +0000 (0:00:00.957) 0:00:49.322 ***********
2025-06-02 20:17:43.035478 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:17:43.035485 | orchestrator |
2025-06-02 20:17:43.035491 | orchestrator | TASK [cinder : Set cinder policy file] *****************************************
2025-06-02 20:17:43.035503 | orchestrator | Monday 02 June 2025 20:15:21 +0000 (0:00:00.113) 0:00:49.435 ***********
2025-06-02 20:17:43.035510 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:17:43.035517 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:17:43.035523 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:17:43.035530 | orchestrator | skipping: [testbed-node-3]
2025-06-02 20:17:43.035536 | orchestrator | skipping: [testbed-node-4]
2025-06-02 20:17:43.035543 | orchestrator | skipping: [testbed-node-5]
2025-06-02 20:17:43.035549 | orchestrator |
2025-06-02 20:17:43.035556 | orchestrator | TASK [cinder : include_tasks] **************************************************
2025-06-02 20:17:43.035562 | orchestrator | Monday 02 June 2025 20:15:22 +0000 (0:00:00.637) 0:00:50.073 ***********
2025-06-02 20:17:43.035570 | orchestrator | included: /ansible/roles/cinder/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-06-02 20:17:43.035579 | orchestrator |
2025-06-02 20:17:43.035586 | orchestrator | TASK [service-cert-copy : cinder | Copying over extra CA certificates] *********
2025-06-02 20:17:43.035592 | orchestrator | Monday 02 June 2025 20:15:23 +0000 (0:00:01.121) 0:00:51.194 ***********
2025-06-02 20:17:43.035599 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530',
'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-06-02 20:17:43.035607 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-06-02 20:17:43.035626 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-06-02 20:17:43.035634 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-06-02 20:17:43.035645 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 
'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-06-02 20:17:43.035652 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-06-02 20:17:43.035659 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-06-02 20:17:43.035998 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-06-02 20:17:43.036020 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-06-02 20:17:43.036034 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-06-02 20:17:43.036041 | orchestrator | changed: 
[testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-06-02 20:17:43.036052 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-06-02 20:17:43.036060 | orchestrator | 2025-06-02 20:17:43.036067 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS certificate] *** 2025-06-02 20:17:43.036074 | orchestrator | Monday 02 June 2025 20:15:26 +0000 (0:00:02.909) 0:00:54.103 *********** 2025-06-02 20:17:43.036088 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-06-02 20:17:43.036099 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-06-02 20:17:43.036111 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 
'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-06-02 20:17:43.036117 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-06-02 20:17:43.036124 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-06-02 20:17:43.036131 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:17:43.036138 
| orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-06-02 20:17:43.036145 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:17:43.036151 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:17:43.036165 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-06-02 20:17:43.036176 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': 
['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-06-02 20:17:43.036182 | orchestrator | skipping: [testbed-node-3] 2025-06-02 20:17:43.036189 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-06-02 20:17:43.036196 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-06-02 20:17:43.036202 | orchestrator | skipping: [testbed-node-4] 2025-06-02 20:17:43.036209 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-06-02 20:17:43.036224 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-06-02 20:17:43.036235 | orchestrator | skipping: 
[testbed-node-5] 2025-06-02 20:17:43.036241 | orchestrator | 2025-06-02 20:17:43.036248 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS key] ****** 2025-06-02 20:17:43.036254 | orchestrator | Monday 02 June 2025 20:15:28 +0000 (0:00:02.329) 0:00:56.433 *********** 2025-06-02 20:17:43.036261 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-06-02 20:17:43.036268 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-06-02 20:17:43.036274 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:17:43.036281 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-06-02 20:17:43.036287 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-06-02 20:17:43.036298 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:17:43.036314 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': 
['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-06-02 20:17:43.036321 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-06-02 20:17:43.036328 | orchestrator | skipping: [testbed-node-4] 2025-06-02 20:17:43.036334 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': 
'30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-06-02 20:17:43.036341 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-06-02 20:17:43.036348 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:17:43.036382 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-06-02 20:17:43.036401 | orchestrator | skipping: 
[testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-06-02 20:17:43.036409 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-06-02 20:17:43.036415 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-06-02 20:17:43.036421 | orchestrator | skipping: [testbed-node-5] 2025-06-02 20:17:43.036428 | orchestrator | skipping: [testbed-node-3] 2025-06-02 20:17:43.036434 | orchestrator | 2025-06-02 20:17:43.036440 | orchestrator | TASK [cinder : Copying over config.json files for services] ******************** 2025-06-02 20:17:43.036446 | orchestrator | Monday 02 June 2025 20:15:31 +0000 (0:00:02.802) 0:00:59.235 *********** 2025-06-02 20:17:43.036454 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-06-02 20:17:43.036461 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': 
['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-06-02 20:17:43.036480 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-06-02 20:17:43.036487 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-06-02 20:17:43.036494 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-06-02 20:17:43.036501 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 
'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-06-02 20:17:43.036513 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-06-02 20:17:43.036526 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-06-02 20:17:43.036533 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-06-02 20:17:43.036540 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-06-02 20:17:43.036547 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-06-02 20:17:43.036555 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-06-02 20:17:43.036567 | orchestrator | 2025-06-02 20:17:43.036573 | orchestrator | TASK [cinder : Copying over cinder-wsgi.conf] ********************************** 2025-06-02 20:17:43.036581 | orchestrator | Monday 02 June 2025 20:15:34 +0000 (0:00:03.095) 0:01:02.331 *********** 2025-06-02 20:17:43.036588 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)  2025-06-02 20:17:43.036595 | orchestrator | skipping: [testbed-node-3] 2025-06-02 20:17:43.036603 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)  2025-06-02 20:17:43.036610 | orchestrator | skipping: [testbed-node-4] 2025-06-02 20:17:43.036617 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2025-06-02 20:17:43.036625 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)  2025-06-02 20:17:43.036632 | orchestrator | skipping: [testbed-node-5] 2025-06-02 20:17:43.036639 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2025-06-02 20:17:43.036650 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2025-06-02 20:17:43.036657 | orchestrator | 2025-06-02 20:17:43.036664 | orchestrator | TASK [cinder : Copying over cinder.conf] 
*************************************** 2025-06-02 20:17:43.036674 | orchestrator | Monday 02 June 2025 20:15:37 +0000 (0:00:02.622) 0:01:04.954 *********** 2025-06-02 20:17:43.036683 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-06-02 20:17:43.036690 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 
'tls_backend': 'no'}}}}) 2025-06-02 20:17:43.036698 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-06-02 20:17:43.036710 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-06-02 20:17:43.036724 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 
'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-06-02 20:17:43.036732 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-06-02 20:17:43.036739 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': 
['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-06-02 20:17:43.036747 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-06-02 20:17:43.036759 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-06-02 20:17:43.036766 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-06-02 20:17:43.036781 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-06-02 20:17:43.036790 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-06-02 20:17:43.036797 | orchestrator | 2025-06-02 20:17:43.036804 | orchestrator | TASK 
[cinder : Generating 'hostnqn' file for cinder_volume] ******************** 2025-06-02 20:17:43.036812 | orchestrator | Monday 02 June 2025 20:15:46 +0000 (0:00:09.175) 0:01:14.129 *********** 2025-06-02 20:17:43.036819 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:17:43.036826 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:17:43.036833 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:17:43.036840 | orchestrator | changed: [testbed-node-3] 2025-06-02 20:17:43.036847 | orchestrator | changed: [testbed-node-4] 2025-06-02 20:17:43.036854 | orchestrator | changed: [testbed-node-5] 2025-06-02 20:17:43.036861 | orchestrator | 2025-06-02 20:17:43.036868 | orchestrator | TASK [cinder : Copying over existing policy file] ****************************** 2025-06-02 20:17:43.036875 | orchestrator | Monday 02 June 2025 20:15:48 +0000 (0:00:02.638) 0:01:16.768 *********** 2025-06-02 20:17:43.036888 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-06-02 20:17:43.036895 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 
'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-06-02 20:17:43.036906 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-06-02 20:17:43.036921 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-06-02 20:17:43.036929 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:17:43.036938 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:17:43.036951 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-06-02 20:17:43.036973 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-06-02 20:17:43.036983 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:17:43.036989 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 
'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-06-02 20:17:43.036995 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-06-02 20:17:43.037001 | orchestrator | skipping: [testbed-node-4] 2025-06-02 20:17:43.037016 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-06-02 20:17:43.037023 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-06-02 20:17:43.037036 | orchestrator | skipping: [testbed-node-3] 2025-06-02 20:17:43.037042 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', 
'/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-06-02 20:17:43.037049 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-06-02 20:17:43.037055 | orchestrator | skipping: [testbed-node-5] 2025-06-02 20:17:43.037060 | orchestrator | 2025-06-02 20:17:43.037066 | orchestrator | TASK [cinder : Copying over nfs_shares files for cinder_volume] **************** 2025-06-02 20:17:43.037072 | orchestrator | Monday 02 June 2025 20:15:50 +0000 (0:00:01.089) 0:01:17.858 *********** 2025-06-02 20:17:43.037078 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:17:43.037084 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:17:43.037089 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:17:43.037095 | orchestrator | skipping: [testbed-node-3] 2025-06-02 20:17:43.037101 | orchestrator | skipping: [testbed-node-4] 2025-06-02 20:17:43.037107 | orchestrator | skipping: [testbed-node-5] 2025-06-02 20:17:43.037113 | orchestrator | 2025-06-02 20:17:43.037119 | orchestrator | TASK [cinder : Check cinder containers] **************************************** 2025-06-02 
20:17:43.037126 | orchestrator | Monday 02 June 2025 20:15:50 +0000 (0:00:00.849) 0:01:18.707 *********** 2025-06-02 20:17:43.037143 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-06-02 20:17:43.037149 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-06-02 
20:17:43.037162 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-06-02 20:17:43.037169 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-06-02 20:17:43.037175 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-06-02 20:17:43.037189 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-06-02 20:17:43.037202 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-06-02 20:17:43.037208 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-06-02 20:17:43.037215 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-06-02 20:17:43.037228 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-06-02 20:17:43.037238 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-06-02 20:17:43.037254 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-06-02 20:17:43.037267 | orchestrator | 2025-06-02 20:17:43.037274 | orchestrator | TASK [cinder : include_tasks] 
************************************************** 2025-06-02 20:17:43.037280 | orchestrator | Monday 02 June 2025 20:15:52 +0000 (0:00:02.046) 0:01:20.753 *********** 2025-06-02 20:17:43.037287 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:17:43.037294 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:17:43.037300 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:17:43.037307 | orchestrator | skipping: [testbed-node-3] 2025-06-02 20:17:43.037313 | orchestrator | skipping: [testbed-node-4] 2025-06-02 20:17:43.037319 | orchestrator | skipping: [testbed-node-5] 2025-06-02 20:17:43.037326 | orchestrator | 2025-06-02 20:17:43.037332 | orchestrator | TASK [cinder : Creating Cinder database] *************************************** 2025-06-02 20:17:43.037338 | orchestrator | Monday 02 June 2025 20:15:53 +0000 (0:00:00.595) 0:01:21.349 *********** 2025-06-02 20:17:43.037344 | orchestrator | changed: [testbed-node-0] 2025-06-02 20:17:43.037396 | orchestrator | 2025-06-02 20:17:43.037404 | orchestrator | TASK [cinder : Creating Cinder database user and setting permissions] ********** 2025-06-02 20:17:43.037410 | orchestrator | Monday 02 June 2025 20:15:55 +0000 (0:00:02.226) 0:01:23.576 *********** 2025-06-02 20:17:43.037416 | orchestrator | changed: [testbed-node-0] 2025-06-02 20:17:43.037422 | orchestrator | 2025-06-02 20:17:43.037429 | orchestrator | TASK [cinder : Running Cinder bootstrap container] ***************************** 2025-06-02 20:17:43.037435 | orchestrator | Monday 02 June 2025 20:15:58 +0000 (0:00:02.267) 0:01:25.843 *********** 2025-06-02 20:17:43.037441 | orchestrator | changed: [testbed-node-0] 2025-06-02 20:17:43.037448 | orchestrator | 2025-06-02 20:17:43.037454 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-06-02 20:17:43.037460 | orchestrator | Monday 02 June 2025 20:16:20 +0000 (0:00:22.011) 0:01:47.855 *********** 2025-06-02 20:17:43.037466 | orchestrator | 
2025-06-02 20:17:43.037472 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-06-02 20:17:43.037478 | orchestrator | Monday 02 June 2025 20:16:20 +0000 (0:00:00.065) 0:01:47.921 *********** 2025-06-02 20:17:43.037485 | orchestrator | 2025-06-02 20:17:43.037491 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-06-02 20:17:43.037497 | orchestrator | Monday 02 June 2025 20:16:20 +0000 (0:00:00.064) 0:01:47.985 *********** 2025-06-02 20:17:43.037503 | orchestrator | 2025-06-02 20:17:43.037509 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-06-02 20:17:43.037515 | orchestrator | Monday 02 June 2025 20:16:20 +0000 (0:00:00.064) 0:01:48.050 *********** 2025-06-02 20:17:43.037521 | orchestrator | 2025-06-02 20:17:43.037528 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-06-02 20:17:43.037534 | orchestrator | Monday 02 June 2025 20:16:20 +0000 (0:00:00.064) 0:01:48.115 *********** 2025-06-02 20:17:43.037541 | orchestrator | 2025-06-02 20:17:43.037547 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-06-02 20:17:43.037553 | orchestrator | Monday 02 June 2025 20:16:20 +0000 (0:00:00.065) 0:01:48.181 *********** 2025-06-02 20:17:43.037559 | orchestrator | 2025-06-02 20:17:43.037566 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-api container] ************************ 2025-06-02 20:17:43.037572 | orchestrator | Monday 02 June 2025 20:16:20 +0000 (0:00:00.062) 0:01:48.243 *********** 2025-06-02 20:17:43.037579 | orchestrator | changed: [testbed-node-0] 2025-06-02 20:17:43.037585 | orchestrator | changed: [testbed-node-1] 2025-06-02 20:17:43.037591 | orchestrator | changed: [testbed-node-2] 2025-06-02 20:17:43.037598 | orchestrator | 2025-06-02 20:17:43.037604 | orchestrator | RUNNING HANDLER [cinder : 
Restart cinder-scheduler container] ****************** 2025-06-02 20:17:43.037610 | orchestrator | Monday 02 June 2025 20:16:45 +0000 (0:00:24.954) 0:02:13.198 *********** 2025-06-02 20:17:43.037617 | orchestrator | changed: [testbed-node-0] 2025-06-02 20:17:43.037623 | orchestrator | changed: [testbed-node-1] 2025-06-02 20:17:43.037630 | orchestrator | changed: [testbed-node-2] 2025-06-02 20:17:43.037641 | orchestrator | 2025-06-02 20:17:43.037647 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-volume container] ********************* 2025-06-02 20:17:43.037653 | orchestrator | Monday 02 June 2025 20:16:55 +0000 (0:00:10.438) 0:02:23.636 *********** 2025-06-02 20:17:43.037660 | orchestrator | changed: [testbed-node-4] 2025-06-02 20:17:43.037666 | orchestrator | changed: [testbed-node-3] 2025-06-02 20:17:43.037673 | orchestrator | changed: [testbed-node-5] 2025-06-02 20:17:43.037679 | orchestrator | 2025-06-02 20:17:43.037686 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-backup container] ********************* 2025-06-02 20:17:43.037693 | orchestrator | Monday 02 June 2025 20:17:35 +0000 (0:00:40.036) 0:03:03.673 *********** 2025-06-02 20:17:43.037698 | orchestrator | changed: [testbed-node-3] 2025-06-02 20:17:43.037705 | orchestrator | changed: [testbed-node-4] 2025-06-02 20:17:43.037710 | orchestrator | changed: [testbed-node-5] 2025-06-02 20:17:43.037716 | orchestrator | 2025-06-02 20:17:43.037722 | orchestrator | RUNNING HANDLER [cinder : Wait for cinder services to update service versions] *** 2025-06-02 20:17:43.037729 | orchestrator | Monday 02 June 2025 20:17:41 +0000 (0:00:05.289) 0:03:08.962 *********** 2025-06-02 20:17:43.037736 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:17:43.037742 | orchestrator | 2025-06-02 20:17:43.037749 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-02 20:17:43.037802 | orchestrator | testbed-node-0 : ok=21  changed=15  unreachable=0 
failed=0 skipped=11  rescued=0 ignored=0 2025-06-02 20:17:43.037816 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2025-06-02 20:17:43.037823 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2025-06-02 20:17:43.037830 | orchestrator | testbed-node-3 : ok=18  changed=12  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025-06-02 20:17:43.037837 | orchestrator | testbed-node-4 : ok=18  changed=12  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025-06-02 20:17:43.037844 | orchestrator | testbed-node-5 : ok=18  changed=12  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025-06-02 20:17:43.037850 | orchestrator | 2025-06-02 20:17:43.037857 | orchestrator | 2025-06-02 20:17:43.037864 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-02 20:17:43.037870 | orchestrator | Monday 02 June 2025 20:17:41 +0000 (0:00:00.504) 0:03:09.466 *********** 2025-06-02 20:17:43.037876 | orchestrator | =============================================================================== 2025-06-02 20:17:43.037883 | orchestrator | cinder : Restart cinder-volume container ------------------------------- 40.04s 2025-06-02 20:17:43.037889 | orchestrator | cinder : Restart cinder-api container ---------------------------------- 24.95s 2025-06-02 20:17:43.037896 | orchestrator | cinder : Running Cinder bootstrap container ---------------------------- 22.01s 2025-06-02 20:17:43.037901 | orchestrator | cinder : Restart cinder-scheduler container ---------------------------- 10.44s 2025-06-02 20:17:43.037908 | orchestrator | cinder : Copying over cinder.conf --------------------------------------- 9.18s 2025-06-02 20:17:43.037914 | orchestrator | service-ks-register : cinder | Granting user roles ---------------------- 7.96s 2025-06-02 20:17:43.037921 | orchestrator | service-ks-register : cinder | Creating 
endpoints ----------------------- 7.04s 2025-06-02 20:17:43.037926 | orchestrator | cinder : Restart cinder-backup container -------------------------------- 5.29s 2025-06-02 20:17:43.037934 | orchestrator | cinder : Copying over multiple ceph.conf for cinder services ------------ 4.59s 2025-06-02 20:17:43.037940 | orchestrator | service-ks-register : cinder | Creating users --------------------------- 4.06s 2025-06-02 20:17:43.037946 | orchestrator | service-ks-register : cinder | Creating projects ------------------------ 3.50s 2025-06-02 20:17:43.037958 | orchestrator | service-ks-register : cinder | Creating services ------------------------ 3.35s 2025-06-02 20:17:43.037965 | orchestrator | service-ks-register : cinder | Creating roles --------------------------- 3.32s 2025-06-02 20:17:43.037971 | orchestrator | cinder : Copying over config.json files for services -------------------- 3.10s 2025-06-02 20:17:43.037977 | orchestrator | service-cert-copy : cinder | Copying over extra CA certificates --------- 2.91s 2025-06-02 20:17:43.037984 | orchestrator | cinder : Ensuring config directories exist ------------------------------ 2.81s 2025-06-02 20:17:43.037991 | orchestrator | service-cert-copy : cinder | Copying over backend internal TLS key ------ 2.80s 2025-06-02 20:17:43.037997 | orchestrator | cinder : Generating 'hostnqn' file for cinder_volume -------------------- 2.64s 2025-06-02 20:17:43.038004 | orchestrator | cinder : Copying over cinder-wsgi.conf ---------------------------------- 2.62s 2025-06-02 20:17:43.038011 | orchestrator | cinder : Copy over Ceph keyring files for cinder-backup ----------------- 2.54s 2025-06-02 20:17:43.038046 | orchestrator | 2025-06-02 20:17:43 | INFO  | Wait 1 second(s) until the next check 2025-06-02 20:17:46.083991 | orchestrator | 2025-06-02 20:17:46 | INFO  | Task ce577db7-04a8-45de-ba4c-ffd16e1d3bc7 is in state STARTED 2025-06-02 20:17:46.087010 | orchestrator | 2025-06-02 20:17:46 | INFO  | Task 
b2fb0cd7-d6ce-48cc-a92d-890fe5ccb1a0 is in state STARTED 2025-06-02 20:17:46.089028 | orchestrator | 2025-06-02 20:17:46 | INFO  | Task 31959657-f934-486f-ae97-741c6a3dbb5e is in state STARTED 2025-06-02 20:17:46.092004 | orchestrator | 2025-06-02 20:17:46 | INFO  | Task 161833d5-2ecf-4bc2-9e8e-dda197c4bd35 is in state STARTED 2025-06-02 20:17:46.092055 | orchestrator | 2025-06-02 20:17:46 | INFO  | Wait 1 second(s) until the next check 2025-06-02 20:17:49.130735 | orchestrator | 2025-06-02 20:17:49 | INFO  | Task ce577db7-04a8-45de-ba4c-ffd16e1d3bc7 is in state STARTED 2025-06-02 20:17:49.132032 | orchestrator | 2025-06-02 20:17:49 | INFO  | Task b2fb0cd7-d6ce-48cc-a92d-890fe5ccb1a0 is in state STARTED 2025-06-02 20:17:49.133544 | orchestrator | 2025-06-02 20:17:49 | INFO  | Task 31959657-f934-486f-ae97-741c6a3dbb5e is in state STARTED 2025-06-02 20:17:49.135802 | orchestrator | 2025-06-02 20:17:49 | INFO  | Task 161833d5-2ecf-4bc2-9e8e-dda197c4bd35 is in state STARTED 2025-06-02 20:17:49.135945 | orchestrator | 2025-06-02 20:17:49 | INFO  | Wait 1 second(s) until the next check 2025-06-02 20:17:52.170201 | orchestrator | 2025-06-02 20:17:52 | INFO  | Task ce577db7-04a8-45de-ba4c-ffd16e1d3bc7 is in state STARTED 2025-06-02 20:17:52.171017 | orchestrator | 2025-06-02 20:17:52 | INFO  | Task b2fb0cd7-d6ce-48cc-a92d-890fe5ccb1a0 is in state STARTED 2025-06-02 20:17:52.171644 | orchestrator | 2025-06-02 20:17:52 | INFO  | Task 31959657-f934-486f-ae97-741c6a3dbb5e is in state STARTED 2025-06-02 20:17:52.172708 | orchestrator | 2025-06-02 20:17:52 | INFO  | Task 161833d5-2ecf-4bc2-9e8e-dda197c4bd35 is in state STARTED 2025-06-02 20:17:52.172743 | orchestrator | 2025-06-02 20:17:52 | INFO  | Wait 1 second(s) until the next check 2025-06-02 20:17:55.220788 | orchestrator | 2025-06-02 20:17:55 | INFO  | Task ce577db7-04a8-45de-ba4c-ffd16e1d3bc7 is in state STARTED 2025-06-02 20:17:55.220888 | orchestrator | 2025-06-02 20:17:55 | INFO  | Task 
b2fb0cd7-d6ce-48cc-a92d-890fe5ccb1a0 is in state STARTED 2025-06-02 20:17:55.221524 | orchestrator | 2025-06-02 20:17:55 | INFO  | Task 31959657-f934-486f-ae97-741c6a3dbb5e is in state STARTED 2025-06-02 20:17:55.222777 | orchestrator | 2025-06-02 20:17:55 | INFO  | Task 161833d5-2ecf-4bc2-9e8e-dda197c4bd35 is in state STARTED 2025-06-02 20:17:55.222837 | orchestrator | 2025-06-02 20:17:55 | INFO  | Wait 1 second(s) until the next check 2025-06-02 20:17:58.273137 | orchestrator | 2025-06-02 20:17:58 | INFO  | Task ce577db7-04a8-45de-ba4c-ffd16e1d3bc7 is in state STARTED 2025-06-02 20:17:58.274938 | orchestrator | 2025-06-02 20:17:58 | INFO  | Task b2fb0cd7-d6ce-48cc-a92d-890fe5ccb1a0 is in state STARTED 2025-06-02 20:17:58.276675 | orchestrator | 2025-06-02 20:17:58 | INFO  | Task 31959657-f934-486f-ae97-741c6a3dbb5e is in state STARTED 2025-06-02 20:17:58.278662 | orchestrator | 2025-06-02 20:17:58 | INFO  | Task 161833d5-2ecf-4bc2-9e8e-dda197c4bd35 is in state STARTED 2025-06-02 20:17:58.279188 | orchestrator | 2025-06-02 20:17:58 | INFO  | Wait 1 second(s) until the next check 2025-06-02 20:18:01.328791 | orchestrator | 2025-06-02 20:18:01 | INFO  | Task ce577db7-04a8-45de-ba4c-ffd16e1d3bc7 is in state STARTED 2025-06-02 20:18:01.329675 | orchestrator | 2025-06-02 20:18:01.329715 | orchestrator | 2025-06-02 20:18:01.329724 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-06-02 20:18:01.329732 | orchestrator | 2025-06-02 20:18:01.329740 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-06-02 20:18:01.329748 | orchestrator | Monday 02 June 2025 20:17:04 +0000 (0:00:00.191) 0:00:00.191 *********** 2025-06-02 20:18:01.329755 | orchestrator | ok: [testbed-node-0] 2025-06-02 20:18:01.329764 | orchestrator | ok: [testbed-node-1] 2025-06-02 20:18:01.329771 | orchestrator | ok: [testbed-node-2] 2025-06-02 20:18:01.329778 | orchestrator | 2025-06-02 20:18:01.329786 | 
orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-06-02 20:18:01.329793 | orchestrator | Monday 02 June 2025 20:17:04 +0000 (0:00:00.237) 0:00:00.429 *********** 2025-06-02 20:18:01.329800 | orchestrator | ok: [testbed-node-0] => (item=enable_octavia_True) 2025-06-02 20:18:01.329808 | orchestrator | ok: [testbed-node-1] => (item=enable_octavia_True) 2025-06-02 20:18:01.329815 | orchestrator | ok: [testbed-node-2] => (item=enable_octavia_True) 2025-06-02 20:18:01.329822 | orchestrator | 2025-06-02 20:18:01.329829 | orchestrator | PLAY [Apply role octavia] ****************************************************** 2025-06-02 20:18:01.329836 | orchestrator | 2025-06-02 20:18:01.329843 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2025-06-02 20:18:01.329850 | orchestrator | Monday 02 June 2025 20:17:04 +0000 (0:00:00.351) 0:00:00.780 *********** 2025-06-02 20:18:01.329857 | orchestrator | included: /ansible/roles/octavia/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 20:18:01.329865 | orchestrator | 2025-06-02 20:18:01.329872 | orchestrator | TASK [service-ks-register : octavia | Creating services] *********************** 2025-06-02 20:18:01.329879 | orchestrator | Monday 02 June 2025 20:17:05 +0000 (0:00:00.517) 0:00:01.297 *********** 2025-06-02 20:18:01.329887 | orchestrator | changed: [testbed-node-0] => (item=octavia (load-balancer)) 2025-06-02 20:18:01.329894 | orchestrator | 2025-06-02 20:18:01.329901 | orchestrator | TASK [service-ks-register : octavia | Creating endpoints] ********************** 2025-06-02 20:18:01.329908 | orchestrator | Monday 02 June 2025 20:17:08 +0000 (0:00:03.387) 0:00:04.685 *********** 2025-06-02 20:18:01.329915 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api-int.testbed.osism.xyz:9876 -> internal) 2025-06-02 20:18:01.329922 | orchestrator | changed: [testbed-node-0] => 
(item=octavia -> https://api.testbed.osism.xyz:9876 -> public) 2025-06-02 20:18:01.329929 | orchestrator | 2025-06-02 20:18:01.329936 | orchestrator | TASK [service-ks-register : octavia | Creating projects] *********************** 2025-06-02 20:18:01.329943 | orchestrator | Monday 02 June 2025 20:17:15 +0000 (0:00:06.513) 0:00:11.199 *********** 2025-06-02 20:18:01.329950 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-06-02 20:18:01.329957 | orchestrator | 2025-06-02 20:18:01.329964 | orchestrator | TASK [service-ks-register : octavia | Creating users] ************************** 2025-06-02 20:18:01.329971 | orchestrator | Monday 02 June 2025 20:17:18 +0000 (0:00:03.221) 0:00:14.420 *********** 2025-06-02 20:18:01.329978 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-06-02 20:18:01.329985 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service) 2025-06-02 20:18:01.329993 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service) 2025-06-02 20:18:01.330059 | orchestrator | 2025-06-02 20:18:01.330068 | orchestrator | TASK [service-ks-register : octavia | Creating roles] ************************** 2025-06-02 20:18:01.330074 | orchestrator | Monday 02 June 2025 20:17:27 +0000 (0:00:08.551) 0:00:22.972 *********** 2025-06-02 20:18:01.330081 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-06-02 20:18:01.330087 | orchestrator | 2025-06-02 20:18:01.330106 | orchestrator | TASK [service-ks-register : octavia | Granting user roles] ********************* 2025-06-02 20:18:01.330113 | orchestrator | Monday 02 June 2025 20:17:30 +0000 (0:00:03.522) 0:00:26.495 *********** 2025-06-02 20:18:01.330119 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service -> admin) 2025-06-02 20:18:01.330126 | orchestrator | ok: [testbed-node-0] => (item=octavia -> service -> admin) 2025-06-02 20:18:01.330132 | orchestrator | 2025-06-02 20:18:01.330139 | orchestrator | TASK [octavia : Adding octavia 
related roles] ********************************** 2025-06-02 20:18:01.330145 | orchestrator | Monday 02 June 2025 20:17:38 +0000 (0:00:07.788) 0:00:34.283 *********** 2025-06-02 20:18:01.330152 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_observer) 2025-06-02 20:18:01.330158 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_global_observer) 2025-06-02 20:18:01.330165 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_member) 2025-06-02 20:18:01.330171 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_admin) 2025-06-02 20:18:01.330178 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_quota_admin) 2025-06-02 20:18:01.330184 | orchestrator | 2025-06-02 20:18:01.330191 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2025-06-02 20:18:01.330198 | orchestrator | Monday 02 June 2025 20:17:54 +0000 (0:00:16.450) 0:00:50.734 *********** 2025-06-02 20:18:01.330206 | orchestrator | included: /ansible/roles/octavia/tasks/prepare.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 20:18:01.330213 | orchestrator | 2025-06-02 20:18:01.330220 | orchestrator | TASK [octavia : Create amphora flavor] ***************************************** 2025-06-02 20:18:01.330226 | orchestrator | Monday 02 June 2025 20:17:55 +0000 (0:00:00.620) 0:00:51.355 *********** 2025-06-02 20:18:01.330234 | orchestrator | An exception occurred during task execution. To see the full traceback, use -vvv. The error was: keystoneauth1.exceptions.catalog.EndpointNotFound: internal endpoint for compute service in RegionOne region not found 2025-06-02 20:18:01.330266 | orchestrator | fatal: [testbed-node-0]: FAILED! 
=> {"action": "os_nova_flavor", "changed": false, "module_stderr": "Traceback (most recent call last):\n File \"/tmp/ansible-tmp-1748895476.9655375-6631-168389877717557/AnsiballZ_compute_flavor.py\", line 107, in \n _ansiballz_main()\n File \"/tmp/ansible-tmp-1748895476.9655375-6631-168389877717557/AnsiballZ_compute_flavor.py\", line 99, in _ansiballz_main\n invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)\n File \"/tmp/ansible-tmp-1748895476.9655375-6631-168389877717557/AnsiballZ_compute_flavor.py\", line 47, in invoke_module\n runpy.run_module(mod_name='ansible_collections.openstack.cloud.plugins.modules.compute_flavor', init_globals=dict(_module_fqn='ansible_collections.openstack.cloud.plugins.modules.compute_flavor', _modlib_path=modlib_path),\n File \"\", line 226, in run_module\n File \"\", line 98, in _run_module_code\n File \"\", line 88, in _run_code\n File \"/tmp/ansible_os_nova_flavor_payload_drp68emd/ansible_os_nova_flavor_payload.zip/ansible_collections/openstack/cloud/plugins/modules/compute_flavor.py\", line 367, in \n File \"/tmp/ansible_os_nova_flavor_payload_drp68emd/ansible_os_nova_flavor_payload.zip/ansible_collections/openstack/cloud/plugins/modules/compute_flavor.py\", line 363, in main\n File \"/tmp/ansible_os_nova_flavor_payload_drp68emd/ansible_os_nova_flavor_payload.zip/ansible_collections/openstack/cloud/plugins/module_utils/openstack.py\", line 417, in __call__\n File \"/tmp/ansible_os_nova_flavor_payload_drp68emd/ansible_os_nova_flavor_payload.zip/ansible_collections/openstack/cloud/plugins/modules/compute_flavor.py\", line 220, in run\n File \"/opt/ansible/lib/python3.11/site-packages/openstack/service_description.py\", line 88, in __get__\n proxy = self._make_proxy(instance)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.11/site-packages/openstack/service_description.py\", line 286, in _make_proxy\n found_version = temp_adapter.get_api_major_version()\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File 
\"/opt/ansible/lib/python3.11/site-packages/keystoneauth1/adapter.py\", line 352, in get_api_major_version\n return self.session.get_api_major_version(auth or self.auth, **kwargs)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.11/site-packages/keystoneauth1/session.py\", line 1289, in get_api_major_version\n return auth.get_api_major_version(self, **kwargs)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.11/site-packages/keystoneauth1/identity/base.py\", line 497, in get_api_major_version\n data = get_endpoint_data(discover_versions=discover_versions)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.11/site-packages/keystoneauth1/identity/base.py\", line 272, in get_endpoint_data\n endpoint_data = service_catalog.endpoint_data_for(\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.11/site-packages/keystoneauth1/access/service_catalog.py\", line 459, in endpoint_data_for\n raise exceptions.EndpointNotFound(msg)\nkeystoneauth1.exceptions.catalog.EndpointNotFound: internal endpoint for compute service in RegionOne region not found\n", "module_stdout": "", "msg": "MODULE FAILURE\nSee stdout/stderr for the exact error", "rc": 1} 2025-06-02 20:18:01.330284 | orchestrator | 2025-06-02 20:18:01.330293 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-02 20:18:01.330301 | orchestrator | testbed-node-0 : ok=11  changed=5  unreachable=0 failed=1  skipped=0 rescued=0 ignored=0 2025-06-02 20:18:01.330310 | orchestrator | testbed-node-1 : ok=4  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-02 20:18:01.330319 | orchestrator | testbed-node-2 : ok=4  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-02 20:18:01.330327 | orchestrator | 2025-06-02 20:18:01.330369 | orchestrator | 2025-06-02 20:18:01.330382 | orchestrator | TASKS RECAP 
******************************************************************** 2025-06-02 20:18:01.330395 | orchestrator | Monday 02 June 2025 20:17:58 +0000 (0:00:03.306) 0:00:54.661 *********** 2025-06-02 20:18:01.330413 | orchestrator | =============================================================================== 2025-06-02 20:18:01.330424 | orchestrator | octavia : Adding octavia related roles --------------------------------- 16.45s 2025-06-02 20:18:01.330432 | orchestrator | service-ks-register : octavia | Creating users -------------------------- 8.55s 2025-06-02 20:18:01.330440 | orchestrator | service-ks-register : octavia | Granting user roles --------------------- 7.79s 2025-06-02 20:18:01.330448 | orchestrator | service-ks-register : octavia | Creating endpoints ---------------------- 6.51s 2025-06-02 20:18:01.330456 | orchestrator | service-ks-register : octavia | Creating roles -------------------------- 3.52s 2025-06-02 20:18:01.330464 | orchestrator | service-ks-register : octavia | Creating services ----------------------- 3.39s 2025-06-02 20:18:01.330472 | orchestrator | octavia : Create amphora flavor ----------------------------------------- 3.31s 2025-06-02 20:18:01.330479 | orchestrator | service-ks-register : octavia | Creating projects ----------------------- 3.22s 2025-06-02 20:18:01.330487 | orchestrator | octavia : include_tasks ------------------------------------------------- 0.62s 2025-06-02 20:18:01.330495 | orchestrator | octavia : include_tasks ------------------------------------------------- 0.52s 2025-06-02 20:18:01.330503 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.35s 2025-06-02 20:18:01.330517 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.24s 2025-06-02 20:18:01.330524 | orchestrator | 2025-06-02 20:18:01 | INFO  | Task b2fb0cd7-d6ce-48cc-a92d-890fe5ccb1a0 is in state SUCCESS 2025-06-02 20:18:01.330533 | orchestrator | 2025-06-02 20:18:01 | 
INFO  | Task 31959657-f934-486f-ae97-741c6a3dbb5e is in state STARTED 2025-06-02 20:18:01.332928 | orchestrator | 2025-06-02 20:18:01 | INFO  | Task 161833d5-2ecf-4bc2-9e8e-dda197c4bd35 is in state STARTED 2025-06-02 20:18:01.333006 | orchestrator | 2025-06-02 20:18:01 | INFO  | Wait 1 second(s) until the next check 2025-06-02 20:19:20.519179 | orchestrator | 2025-06-02 20:19:20 | INFO  | Task ce577db7-04a8-45de-ba4c-ffd16e1d3bc7 is in state STARTED 2025-06-02 20:19:20.519251 | orchestrator | 2025-06-02 20:19:20 | INFO  | Task 31959657-f934-486f-ae97-741c6a3dbb5e is in state SUCCESS 2025-06-02 20:19:20.519701 | orchestrator | 2025-06-02 20:19:20 | INFO  | Task 161833d5-2ecf-4bc2-9e8e-dda197c4bd35 is in state STARTED 2025-06-02 20:19:20.519785 | orchestrator | 2025-06-02 20:19:20 | INFO  | Wait 1 second(s) until the next check 2025-06-02 20:19:50.969713 | orchestrator | 2025-06-02 20:19:50 | INFO  | Task ce577db7-04a8-45de-ba4c-ffd16e1d3bc7 is in state STARTED 2025-06-02 20:19:50.973871 | orchestrator | 2025-06-02 20:19:50 | INFO  | Task 161833d5-2ecf-4bc2-9e8e-dda197c4bd35 is in state SUCCESS 2025-06-02 20:19:50.975166 | orchestrator | 2025-06-02 20:19:50.975220 | orchestrator | 2025-06-02 20:19:50.975228 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-06-02 20:19:50.975338 | orchestrator | 2025-06-02 20:19:50.975348 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-06-02
20:19:50.975355 | orchestrator | Monday 02 June 2025 20:15:47 +0000 (0:00:00.325) 0:00:00.326 *********** 2025-06-02 20:19:50.975362 | orchestrator | ok: [testbed-node-0] 2025-06-02 20:19:50.975370 | orchestrator | ok: [testbed-node-1] 2025-06-02 20:19:50.975376 | orchestrator | ok: [testbed-node-2] 2025-06-02 20:19:50.975382 | orchestrator | 2025-06-02 20:19:50.975389 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-06-02 20:19:50.975423 | orchestrator | Monday 02 June 2025 20:15:48 +0000 (0:00:00.254) 0:00:00.580 *********** 2025-06-02 20:19:50.975430 | orchestrator | ok: [testbed-node-0] => (item=enable_nova_True) 2025-06-02 20:19:50.975436 | orchestrator | ok: [testbed-node-1] => (item=enable_nova_True) 2025-06-02 20:19:50.975442 | orchestrator | ok: [testbed-node-2] => (item=enable_nova_True) 2025-06-02 20:19:50.975449 | orchestrator | 2025-06-02 20:19:50.975456 | orchestrator | PLAY [Wait for the Nova service] *********************************************** 2025-06-02 20:19:50.975463 | orchestrator | 2025-06-02 20:19:50.975469 | orchestrator | TASK [Waiting for Nova public port to be UP] *********************************** 2025-06-02 20:19:50.975476 | orchestrator | Monday 02 June 2025 20:15:48 +0000 (0:00:00.619) 0:00:01.199 *********** 2025-06-02 20:19:50.975483 | orchestrator | 2025-06-02 20:19:50.975489 | orchestrator | STILL ALIVE [task 'Waiting for Nova public port to be UP' is running] ********** 2025-06-02 20:19:50.975495 | orchestrator | 2025-06-02 20:19:50.975501 | orchestrator | STILL ALIVE [task 'Waiting for Nova public port to be UP' is running] ********** 2025-06-02 20:19:50.975508 | orchestrator | ok: [testbed-node-0] 2025-06-02 20:19:50.975545 | orchestrator | ok: [testbed-node-1] 2025-06-02 20:19:50.975552 | orchestrator | ok: [testbed-node-2] 2025-06-02 20:19:50.975559 | orchestrator | 2025-06-02 20:19:50.975565 | orchestrator | PLAY RECAP 
********************************************************************* 2025-06-02 20:19:50.975573 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-02 20:19:50.975580 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-02 20:19:50.975586 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-02 20:19:50.975592 | orchestrator | 2025-06-02 20:19:50.975597 | orchestrator | 2025-06-02 20:19:50.975603 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-02 20:19:50.975610 | orchestrator | Monday 02 June 2025 20:19:18 +0000 (0:03:30.035) 0:03:31.234 *********** 2025-06-02 20:19:50.975617 | orchestrator | =============================================================================== 2025-06-02 20:19:50.975625 | orchestrator | Waiting for Nova public port to be UP --------------------------------- 210.04s 2025-06-02 20:19:50.975632 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.62s 2025-06-02 20:19:50.975639 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.25s 2025-06-02 20:19:50.975645 | orchestrator | 2025-06-02 20:19:50.975651 | orchestrator | 2025-06-02 20:19:50.975656 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-06-02 20:19:50.975661 | orchestrator | 2025-06-02 20:19:50.975666 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-06-02 20:19:50.975685 | orchestrator | Monday 02 June 2025 20:17:45 +0000 (0:00:00.230) 0:00:00.230 *********** 2025-06-02 20:19:50.975691 | orchestrator | ok: [testbed-node-0] 2025-06-02 20:19:50.975696 | orchestrator | ok: [testbed-node-1] 2025-06-02 20:19:50.975701 | orchestrator | ok: [testbed-node-2] 2025-06-02 
20:19:50.975707 | orchestrator | 2025-06-02 20:19:50.975712 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-06-02 20:19:50.975718 | orchestrator | Monday 02 June 2025 20:17:45 +0000 (0:00:00.249) 0:00:00.479 *********** 2025-06-02 20:19:50.975723 | orchestrator | ok: [testbed-node-0] => (item=enable_grafana_True) 2025-06-02 20:19:50.975729 | orchestrator | ok: [testbed-node-1] => (item=enable_grafana_True) 2025-06-02 20:19:50.975735 | orchestrator | ok: [testbed-node-2] => (item=enable_grafana_True) 2025-06-02 20:19:50.975742 | orchestrator | 2025-06-02 20:19:50.975748 | orchestrator | PLAY [Apply role grafana] ****************************************************** 2025-06-02 20:19:50.975755 | orchestrator | 2025-06-02 20:19:50.975761 | orchestrator | TASK [grafana : include_tasks] ************************************************* 2025-06-02 20:19:50.975767 | orchestrator | Monday 02 June 2025 20:17:46 +0000 (0:00:00.353) 0:00:00.833 *********** 2025-06-02 20:19:50.975784 | orchestrator | included: /ansible/roles/grafana/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 20:19:50.975791 | orchestrator | 2025-06-02 20:19:50.975797 | orchestrator | TASK [grafana : Ensuring config directories exist] ***************************** 2025-06-02 20:19:50.975803 | orchestrator | Monday 02 June 2025 20:17:46 +0000 (0:00:00.444) 0:00:01.277 *********** 2025-06-02 20:19:50.975814 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.1.20250530', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 
'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-06-02 20:19:50.975840 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.1.20250530', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-06-02 20:19:50.975847 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.1.20250530', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-06-02 20:19:50.975853 | orchestrator | 2025-06-02 20:19:50.975859 | orchestrator | TASK [grafana : Check if extra configuration file exists] ********************** 2025-06-02 20:19:50.975865 | orchestrator | Monday 02 June 2025 20:17:47 +0000 (0:00:00.712) 0:00:01.989 *********** 2025-06-02 20:19:50.975872 | orchestrator | [WARNING]: Skipped '/operations/prometheus/grafana' path due to this 
access 2025-06-02 20:19:50.975880 | orchestrator | issue: '/operations/prometheus/grafana' is not a directory 2025-06-02 20:19:50.975886 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-06-02 20:19:50.975893 | orchestrator | 2025-06-02 20:19:50.975899 | orchestrator | TASK [grafana : include_tasks] ************************************************* 2025-06-02 20:19:50.975905 | orchestrator | Monday 02 June 2025 20:17:48 +0000 (0:00:00.730) 0:00:02.720 *********** 2025-06-02 20:19:50.975913 | orchestrator | included: /ansible/roles/grafana/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 20:19:50.975920 | orchestrator | 2025-06-02 20:19:50.975926 | orchestrator | TASK [service-cert-copy : grafana | Copying over extra CA certificates] ******** 2025-06-02 20:19:50.976047 | orchestrator | Monday 02 June 2025 20:17:48 +0000 (0:00:00.607) 0:00:03.328 *********** 2025-06-02 20:19:50.976066 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.1.20250530', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-06-02 20:19:50.976083 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.1.20250530', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-06-02 20:19:50.976100 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.1.20250530', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-06-02 20:19:50.976107 | orchestrator | 2025-06-02 20:19:50.976113 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS certificate] *** 2025-06-02 20:19:50.976119 | orchestrator | Monday 02 June 2025 20:17:49 +0000 (0:00:01.300) 0:00:04.628 *********** 2025-06-02 20:19:50.976125 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.1.20250530', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-06-02 20:19:50.976132 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.1.20250530', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2] => (item={'key': 'grafana', ...})
skipping: [testbed-node-2]

TASK [service-cert-copy : grafana | Copying over backend internal TLS key] *****
Monday 02 June 2025 20:17:50 +0000 (0:00:00.348) 0:00:04.976 ***********
skipping: [testbed-node-0] => (item={'key': 'grafana', ...})
skipping: [testbed-node-1] => (item={'key': 'grafana', ...})
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2] => (item={'key': 'grafana', ...})
skipping: [testbed-node-2]

TASK [grafana : Copying over config.json files] ********************************
Monday 02 June 2025 20:17:50 +0000 (0:00:00.654) 0:00:05.630 ***********
changed: [testbed-node-0] => (item={'key': 'grafana', ...})
changed: [testbed-node-1] => (item={'key': 'grafana', ...})
changed: [testbed-node-2] => (item={'key': 'grafana', ...})

TASK [grafana : Copying over grafana.ini] **************************************
Monday 02 June 2025 20:17:52 +0000 (0:00:01.186) 0:00:06.817 ***********
changed: [testbed-node-0] => (item={'key': 'grafana', ...})
changed: [testbed-node-1] => (item={'key': 'grafana', ...})
changed: [testbed-node-2] => (item={'key': 'grafana', ...})

TASK [grafana : Copying over extra configuration file] *************************
Monday 02 June 2025 20:17:53 +0000 (0:00:01.321) 0:00:08.138 ***********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [grafana : Configuring Prometheus as data source for Grafana] *************
Monday 02 June 2025 20:17:54 +0000 (0:00:00.547) 0:00:08.686 ***********
changed: [testbed-node-0] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2)
changed: [testbed-node-1] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2)
changed: [testbed-node-2] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2)

TASK [grafana : Configuring dashboards provisioning] ***************************
Monday 02 June 2025 20:17:55 +0000 (0:00:01.272) 0:00:09.959 ***********
changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml)
changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml)
changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml)

TASK [grafana : Find custom grafana dashboards] ********************************
Monday 02 June 2025 20:17:56 +0000 (0:00:00.754) 0:00:11.213 ***********
ok: [testbed-node-0 -> localhost]

TASK [grafana : Find templated grafana dashboards] *****************************
Monday 02 June 2025 20:17:57 +0000 (0:00:00.754) 0:00:11.967 ***********
[WARNING]: Skipped '/etc/kolla/grafana/dashboards' path due to this access
issue: '/etc/kolla/grafana/dashboards' is not a directory
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [grafana : Prune templated Grafana dashboards] ****************************
Monday 02 June 2025 20:17:57 +0000 (0:00:00.689) 0:00:12.657 ***********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [grafana : Copying over custom dashboards] ********************************
Monday 02 June 2025 20:17:58 +0000 (0:00:00.559) 0:00:13.216 ***********
changed: [testbed-node-0] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1089248, 'dev': 113, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748892698.6090467, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
changed: [testbed-node-1] => (item={'key': 'ceph/rgw-s3-analytics.json', ...})
changed: [testbed-node-2] => (item={'key': 'ceph/rgw-s3-analytics.json', ...})
changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'size': 19695, 'inode': 1089226, 'ctime': 1748892698.6050465, ...}})
changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-detail.json', ...})
changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-detail.json', ...})
changed: [testbed-node-1] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'size': 38432, 'inode': 1089218, 'ctime': 1748892698.6020465, ...}})
changed: [testbed-node-0] => (item={'key': 'ceph/osds-overview.json', ...})
changed: [testbed-node-2] => (item={'key': 'ceph/osds-overview.json', ...})
changed: [testbed-node-1] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'size': 12997, 'inode': 1089240, 'ctime': 1748892698.6060467, ...}})
changed: [testbed-node-0] => (item={'key': 'ceph/rbd-details.json', ...})
changed: [testbed-node-2] => (item={'key': 'ceph/rbd-details.json', ...})
changed: [testbed-node-1] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'size': 44791, 'inode': 1089200, 'ctime': 1748892698.6000464, ...}})
changed: [testbed-node-0] => (item={'key': 'ceph/host-details.json', ...})
changed: [testbed-node-2] => (item={'key': 'ceph/host-details.json', ...})
changed: [testbed-node-1] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'size': 19609, 'inode': 1089220, 'ctime': 1748892698.6030467, ...}})
changed: [testbed-node-0] => (item={'key': 'ceph/pool-detail.json', ...})
changed: [testbed-node-2] => (item={'key': 'ceph/pool-detail.json', ...})
changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'size': 16156, 'inode': 1089237, 'ctime': 1748892698.6060467, ...}})
changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-sync-overview.json', ...})
changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-sync-overview.json', ...})
changed: [testbed-node-1] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'size': 9025, 'inode': 1089197, 'ctime': 1748892698.5990465, ...}})
changed: [testbed-node-0] => (item={'key': 'ceph/cephfs-overview.json', ...})
changed: [testbed-node-2] => (item={'key': 'ceph/cephfs-overview.json', ...})
changed: [testbed-node-1] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'size': 84, 'inode': 1089166, 'ctime': 1748892698.5950465, ...}})
changed: [testbed-node-2] => (item={'key': 'ceph/README.md', ...})
changed: [testbed-node-0] => (item={'key': 'ceph/README.md', ...})
changed: [testbed-node-1] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'size': 27218, 'inode': 1089205, 'ctime': 1748892698.6000464, ...}})
changed: [testbed-node-2] => (item={'key': 'ceph/hosts-overview.json', ...})
changed: [testbed-node-0] => (item={'key': 'ceph/hosts-overview.json', ...})
changed: [testbed-node-1] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'size': 34113, 'inode': 1089178, 'ctime': 1748892698.5970464, ...}})
changed: [testbed-node-2] => (item={'key': 'ceph/ceph-cluster.json', ...})
changed: [testbed-node-0] => (item={'key': 'ceph/ceph-cluster.json', ...})
changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'size': 39556, 'inode': 1089229, 'ctime': 1748892698.6050465, ...}})
changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-overview.json', ...})
changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-overview.json', ...})
changed: [testbed-node-1] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'size': 62676, 'inode': 1089210, 'ctime': 1748892698.6010466, ...}})
changed: [testbed-node-2] => (item={'key': 'ceph/multi-cluster-overview.json', ...})
changed: [testbed-node-0] => (item={'key': 'ceph/multi-cluster-overview.json', ...})
changed: [testbed-node-1] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'size': 25686, 'inode': 1089241, 'ctime': 1748892698.6070466, ...}})
changed: [testbed-node-2] => (item={'key': 'ceph/rbd-overview.json', ...})
changed: [testbed-node-0] => (item={'key': 'ceph/rbd-overview.json', ...})
changed: [testbed-node-1] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1089189, 'dev': 113, 'nlink':
1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748892698.5990465, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 20:19:50.977270 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1089189, 'dev': 113, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748892698.5990465, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 20:19:50.977277 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1089189, 'dev': 113, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748892698.5990465, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 20:19:50.977298 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1089224, 'dev': 
113, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748892698.6040466, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 20:19:50.977306 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1089224, 'dev': 113, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748892698.6040466, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 20:19:50.977313 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1089224, 'dev': 113, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748892698.6040466, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 20:19:50.977319 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 
'size': 117836, 'inode': 1089168, 'dev': 113, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748892698.5960464, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 20:19:50.977329 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1089168, 'dev': 113, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748892698.5960464, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 20:19:50.977336 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1089168, 'dev': 113, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748892698.5960464, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 20:19:50.977348 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': 
False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1089184, 'dev': 113, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748892698.5980465, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 20:19:50.977359 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1089184, 'dev': 113, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748892698.5980465, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 20:19:50.977367 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1089184, 'dev': 113, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748892698.5980465, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 20:19:50.977374 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': 
False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1089216, 'dev': 113, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748892698.6020465, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 20:19:50.977384 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1089216, 'dev': 113, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748892698.6020465, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 20:19:50.977392 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1089216, 'dev': 113, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748892698.6020465, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 20:19:50.977403 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': 
'/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1089331, 'dev': 113, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748892698.628047, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 20:19:50.977417 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1089331, 'dev': 113, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748892698.628047, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 20:19:50.977424 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1089331, 'dev': 113, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748892698.628047, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 20:19:50.977431 | 
orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1089319, 'dev': 113, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748892698.622047, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 20:19:50.977441 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1089319, 'dev': 113, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748892698.622047, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 20:19:50.977449 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1089319, 'dev': 113, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748892698.622047, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 
'isgid': False}}) 2025-06-02 20:19:50.977460 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1089253, 'dev': 113, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748892698.6090467, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 20:19:50.977473 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1089253, 'dev': 113, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748892698.6090467, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 20:19:50.977480 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1089253, 'dev': 113, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748892698.6090467, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 
'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 20:19:50.977487 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1089624, 'dev': 113, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748892698.688048, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 20:19:50.977494 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1089624, 'dev': 113, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748892698.688048, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 20:19:50.977504 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1089624, 'dev': 
113, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748892698.688048, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 20:19:50.977515 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1089254, 'dev': 113, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748892698.6100466, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 20:19:50.977884 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1089254, 'dev': 113, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748892698.6100466, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 20:19:50.977905 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 
0, 'gid': 0, 'size': 31128, 'inode': 1089254, 'dev': 113, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748892698.6100466, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 20:19:50.977913 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 1089613, 'dev': 113, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748892698.6860478, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 20:19:50.977922 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 1089613, 'dev': 113, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748892698.6860478, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 20:19:50.977937 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': 
'/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 1089613, 'dev': 113, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748892698.6860478, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 20:19:50.977953 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1089628, 'dev': 113, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748892698.690048, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 20:19:50.977965 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1089628, 'dev': 113, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748892698.690048, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 20:19:50.977973 | orchestrator | changed: [testbed-node-0] 
=> (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1089628, 'dev': 113, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748892698.690048, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 20:19:50.977981 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1089351, 'dev': 113, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748892698.629047, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 20:19:50.977988 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1089351, 'dev': 113, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748892698.629047, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': 
True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-06-02 20:19:50.977999 | orchestrator | changed: [testbed-node-0] => (item=infrastructure/node_exporter_side_by_side.json, mode=0644, size=70691)
2025-06-02 20:19:50.978049 | orchestrator | changed: [testbed-node-2] => (item=infrastructure/opensearch.json, mode=0644, size=65458)
2025-06-02 20:19:50.978061 | orchestrator | changed: [testbed-node-1] => (item=infrastructure/opensearch.json, mode=0644, size=65458)
2025-06-02 20:19:50.978069 | orchestrator | changed: [testbed-node-0] => (item=infrastructure/opensearch.json, mode=0644, size=65458)
2025-06-02 20:19:50.978076 | orchestrator | changed: [testbed-node-1] => (item=infrastructure/cadvisor.json, mode=0644, size=53882)
2025-06-02 20:19:50.978083 | orchestrator | changed: [testbed-node-2] => (item=infrastructure/cadvisor.json, mode=0644, size=53882)
2025-06-02 20:19:50.978094 | orchestrator | changed: [testbed-node-0] => (item=infrastructure/cadvisor.json, mode=0644, size=53882)
2025-06-02 20:19:50.978106 | orchestrator | changed: [testbed-node-1] => (item=infrastructure/memcached.json, mode=0644, size=24243)
2025-06-02 20:19:50.978114 | orchestrator | changed: [testbed-node-2] => (item=infrastructure/memcached.json, mode=0644, size=24243)
2025-06-02 20:19:50.978124 | orchestrator | changed: [testbed-node-0] => (item=infrastructure/memcached.json, mode=0644, size=24243)
2025-06-02 20:19:50.978132 | orchestrator | changed: [testbed-node-2] => (item=infrastructure/redfish.json, mode=0644, size=38087)
2025-06-02 20:19:50.978139 | orchestrator | changed: [testbed-node-1] => (item=infrastructure/redfish.json, mode=0644, size=38087)
2025-06-02 20:19:50.978153 | orchestrator | changed: [testbed-node-0] => (item=infrastructure/redfish.json, mode=0644, size=38087)
2025-06-02 20:19:50.978165 | orchestrator | changed: [testbed-node-2] => (item=infrastructure/prometheus.json, mode=0644, size=21898)
2025-06-02 20:19:50.978172 | orchestrator | changed: [testbed-node-1] => (item=infrastructure/prometheus.json, mode=0644, size=21898)
2025-06-02 20:19:50.978183 | orchestrator | changed: [testbed-node-0] => (item=infrastructure/prometheus.json, mode=0644, size=21898)
2025-06-02 20:19:50.978190 | orchestrator | changed: [testbed-node-2] => (item=infrastructure/elasticsearch.json, mode=0644, size=187864)
2025-06-02 20:19:50.978199 | orchestrator | changed: [testbed-node-1] => (item=infrastructure/elasticsearch.json, mode=0644, size=187864)
2025-06-02 20:19:50.978208 | orchestrator | changed: [testbed-node-0] => (item=infrastructure/elasticsearch.json, mode=0644, size=187864)
2025-06-02 20:19:50.978225 | orchestrator | changed: [testbed-node-2] => (item=infrastructure/database.json, mode=0644, size=30898)
2025-06-02 20:19:50.978233 | orchestrator | changed: [testbed-node-1] => (item=infrastructure/database.json, mode=0644, size=30898)
2025-06-02 20:19:50.978266 | orchestrator | changed: [testbed-node-0] => (item=infrastructure/database.json, mode=0644, size=30898)
2025-06-02 20:19:50.978274 | orchestrator | changed: [testbed-node-2] => (item=infrastructure/fluentd.json, mode=0644, size=82960)
2025-06-02 20:19:50.978282 | orchestrator | changed: [testbed-node-1] => (item=infrastructure/fluentd.json, mode=0644, size=82960)
2025-06-02 20:19:50.978289 | orchestrator | changed: [testbed-node-0] => (item=infrastructure/fluentd.json, mode=0644, size=82960)
2025-06-02 20:19:50.978304 | orchestrator | changed: [testbed-node-2] => (item=infrastructure/haproxy.json, mode=0644, size=410814)
2025-06-02 20:19:50.978311 | orchestrator | changed: [testbed-node-1] => (item=infrastructure/haproxy.json, mode=0644, size=410814)
2025-06-02 20:19:50.978318 | orchestrator | changed: [testbed-node-0] => (item=infrastructure/haproxy.json, mode=0644, size=410814)
2025-06-02 20:19:50.978329 | orchestrator | changed: [testbed-node-2] => (item=infrastructure/node-cluster-rsrc-use.json, mode=0644, size=16098)
2025-06-02 20:19:50.978336 | orchestrator | changed: [testbed-node-1] => (item=infrastructure/node-cluster-rsrc-use.json, mode=0644, size=16098)
2025-06-02 20:19:50.978343 | orchestrator | changed: [testbed-node-0] => (item=infrastructure/node-cluster-rsrc-use.json, mode=0644, size=16098)
2025-06-02 20:19:50.978360 | orchestrator | changed: [testbed-node-2] => (item=infrastructure/nodes.json, mode=0644, size=21109)
2025-06-02 20:19:50.978367 | orchestrator | changed: [testbed-node-1] => (item=infrastructure/nodes.json, mode=0644, size=21109)
2025-06-02 20:19:50.978375 | orchestrator | changed: [testbed-node-0] => (item=infrastructure/nodes.json, mode=0644, size=21109)
2025-06-02 20:19:50.978386 | orchestrator | changed: [testbed-node-2] => (item=infrastructure/node-rsrc-use.json, mode=0644, size=15725)
2025-06-02 20:19:50.978394 | orchestrator | changed: [testbed-node-1] => (item=infrastructure/node-rsrc-use.json, mode=0644, size=15725)
2025-06-02 20:19:50.978406 | orchestrator | changed: [testbed-node-0] => (item=infrastructure/node-rsrc-use.json, mode=0644, size=15725)
2025-06-02 20:19:50.978416 | orchestrator | changed: [testbed-node-2] => (item=openstack/openstack.json, mode=0644, size=57270)
2025-06-02 20:19:50.978423 | orchestrator | changed: [testbed-node-1] => (item=openstack/openstack.json, mode=0644, size=57270)
2025-06-02 20:19:50.978430 | orchestrator | changed: [testbed-node-0] => (item=openstack/openstack.json, mode=0644, size=57270)
2025-06-02 20:19:50.978437 | orchestrator |
2025-06-02 20:19:50.978444 | orchestrator | TASK [grafana : Check grafana containers] **************************************
2025-06-02 20:19:50.978452 | orchestrator | Monday 02 June 2025 20:18:35 +0000 (0:00:37.075) 0:00:50.292 ***********
2025-06-02 20:19:50.978463 | orchestrator | changed: [testbed-node-0] => (item=grafana, image=registry.osism.tech/kolla/release/grafana:12.0.1.20250530)
2025-06-02 20:19:50.978471 | orchestrator | changed: [testbed-node-1] => (item=grafana, image=registry.osism.tech/kolla/release/grafana:12.0.1.20250530)
2025-06-02 20:19:50.978484 | orchestrator | changed: [testbed-node-2] => (item=grafana, image=registry.osism.tech/kolla/release/grafana:12.0.1.20250530)
2025-06-02 20:19:50.978491 | orchestrator |
2025-06-02 20:19:50.978498 | orchestrator | TASK [grafana : Creating grafana database] *************************************
2025-06-02 20:19:50.978505 | orchestrator | Monday 02 June 2025 20:18:36 +0000 (0:00:01.170) 0:00:51.463 ***********
2025-06-02 20:19:50.978512 | orchestrator | changed: [testbed-node-0]
2025-06-02 20:19:50.978520 | orchestrator |
2025-06-02 20:19:50.978527 | orchestrator | TASK [grafana : Creating grafana database user and setting permissions] ********
2025-06-02 20:19:50.978533 | orchestrator | Monday 02 June 2025 20:18:38 +0000 (0:00:02.107) 0:00:53.570 ***********
2025-06-02 20:19:50.978540 | orchestrator | changed: [testbed-node-0]
2025-06-02 20:19:50.978546 | orchestrator |
2025-06-02 20:19:50.978551 | orchestrator | TASK [grafana : Flush handlers] ************************************************
2025-06-02 20:19:50.978560 | orchestrator | Monday 02 June 2025 20:18:41 +0000 (0:00:00.260) 0:00:55.691 ***********
2025-06-02 20:19:50.978566 | orchestrator |
2025-06-02 20:19:50.978572 | orchestrator | TASK [grafana : Flush handlers] ************************************************
2025-06-02 20:19:50.978578 | orchestrator | Monday 02 June 2025 20:18:41 +0000 (0:00:00.062) 0:00:55.951 ***********
2025-06-02 20:19:50.978583 | orchestrator |
2025-06-02 20:19:50.978589 | orchestrator | TASK [grafana : Flush handlers] ************************************************
2025-06-02 20:19:50.978595 | orchestrator | Monday 02 June 2025 20:18:41 +0000 (0:00:00.064) 0:00:56.014 ***********
2025-06-02 20:19:50.978600 | orchestrator |
2025-06-02 20:19:50.978606 | orchestrator | RUNNING HANDLER [grafana : Restart first grafana container] ********************
2025-06-02 20:19:50.978612 | orchestrator | Monday 02 June 2025 20:18:41 +0000 (0:00:00.064) 0:00:56.079 ***********
2025-06-02 20:19:50.978618 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:19:50.978624 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:19:50.978630 | orchestrator | changed: [testbed-node-0]
2025-06-02 20:19:50.978637 | orchestrator |
2025-06-02 20:19:50.978643 | orchestrator | RUNNING HANDLER [grafana : Waiting for grafana to start on first node] *********
2025-06-02 20:19:50.978650 | orchestrator | Monday 02 June 2025 20:18:43 +0000 (0:00:01.910) 0:00:57.990 ***********
2025-06-02 20:19:50.978657 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:19:50.978664 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:19:50.978671 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (12 retries left).
2025-06-02 20:19:50.978678 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (11 retries left).
2025-06-02 20:19:50.978685 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (10 retries left).
2025-06-02 20:19:50.978692 | orchestrator | ok: [testbed-node-0]
2025-06-02 20:19:50.978700 | orchestrator |
2025-06-02 20:19:50.978706 | orchestrator | RUNNING HANDLER [grafana : Restart remaining grafana containers] ***************
2025-06-02 20:19:50.978713 | orchestrator | Monday 02 June 2025 20:19:21 +0000 (0:00:38.507) 0:01:36.497 ***********
2025-06-02 20:19:50.978720 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:19:50.978727 | orchestrator | changed: [testbed-node-2]
2025-06-02 20:19:50.978734 | orchestrator | changed: [testbed-node-1]
2025-06-02 20:19:50.978746 | orchestrator |
2025-06-02 20:19:50.978753 | orchestrator | TASK [grafana : Wait for grafana application ready] ****************************
2025-06-02 20:19:50.978760 | orchestrator | Monday 02 June 2025 20:19:43 +0000 (0:00:21.209) 0:01:57.707 ***********
2025-06-02 20:19:50.978767 | orchestrator | ok: [testbed-node-0]
2025-06-02 20:19:50.978774 | orchestrator |
2025-06-02 20:19:50.978784 | orchestrator | TASK [grafana : Remove old grafana docker volume] ******************************
2025-06-02 20:19:50.978791 | orchestrator | Monday 02 June 2025 20:19:45 +0000 (0:00:02.634) 0:02:00.341 ***********
2025-06-02 20:19:50.978797 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:19:50.978804 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:19:50.978810 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:19:50.978817 | orchestrator |
2025-06-02 20:19:50.978824 | orchestrator | TASK [grafana : Enable grafana datasources] ************************************
2025-06-02 20:19:50.978830 | orchestrator | Monday 02 June 2025 20:19:45 +0000 (0:00:00.273) 0:02:00.615 ***********
2025-06-02 20:19:50.978837 | orchestrator | skipping: [testbed-node-0] => (item=influxdb, enabled=False, url=https://api-int.testbed.osism.xyz:8086)
2025-06-02 20:19:50.978845 | orchestrator | changed: [testbed-node-0] => (item=opensearch, type=grafana-opensearch-datasource, url=https://api-int.testbed.osism.xyz:9200)
2025-06-02 20:19:50.978852 | orchestrator |
2025-06-02 20:19:50.978858 | orchestrator | TASK [grafana : Disable Getting Started panel] *********************************
2025-06-02 20:19:50.978864 | orchestrator | Monday 02 June 2025 20:19:48 +0000 (0:00:02.475) 0:02:03.090 ***********
2025-06-02 20:19:50.978871 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:19:50.978879 | orchestrator |
2025-06-02 20:19:50.978885 | orchestrator | PLAY RECAP *********************************************************************
2025-06-02 20:19:50.978892 | orchestrator |
testbed-node-0 : ok=21  changed=12  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2025-06-02 20:19:50.978901 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2025-06-02 20:19:50.978907 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2025-06-02 20:19:50.978914 | orchestrator |
2025-06-02 20:19:50.978921 | orchestrator |
2025-06-02 20:19:50.978927 | orchestrator | TASKS RECAP ********************************************************************
2025-06-02 20:19:50.978934 | orchestrator | Monday 02 June 2025 20:19:48 +0000 (0:00:00.281) 0:02:03.371 ***********
2025-06-02 20:19:50.978940 | orchestrator | ===============================================================================
2025-06-02 20:19:50.978947 | orchestrator | grafana : Waiting for grafana to start on first node ------------------- 38.51s
2025-06-02 20:19:50.978954 | orchestrator | grafana : Copying over custom dashboards ------------------------------- 37.08s
2025-06-02 20:19:50.978960 | orchestrator | grafana : Restart remaining grafana containers ------------------------- 21.21s
2025-06-02 20:19:50.978971 | orchestrator | grafana : Wait for grafana application ready ---------------------------- 2.63s
2025-06-02 20:19:50.978977 | orchestrator | grafana : Enable grafana datasources ------------------------------------ 2.48s
2025-06-02 20:19:50.978983 | orchestrator | grafana : Creating grafana database user and setting permissions -------- 2.12s
2025-06-02 20:19:50.978990 | orchestrator | grafana : Creating grafana database ------------------------------------- 2.11s
2025-06-02 20:19:50.978996 | orchestrator | grafana : Restart first grafana container ------------------------------- 1.91s
2025-06-02 20:19:50.979004 | orchestrator | grafana : Copying over grafana.ini -------------------------------------- 1.32s
2025-06-02 20:19:50.979018 | orchestrator | service-cert-copy : grafana | Copying over extra CA certificates -------- 1.30s
2025-06-02 20:19:50.979024 | orchestrator | grafana : Configuring Prometheus as data source for Grafana ------------- 1.27s
2025-06-02 20:19:50.979031 | orchestrator | grafana : Configuring dashboards provisioning --------------------------- 1.25s
2025-06-02 20:19:50.979038 | orchestrator | grafana : Copying over config.json files -------------------------------- 1.19s
2025-06-02 20:19:50.979044 | orchestrator | grafana : Check grafana containers -------------------------------------- 1.17s
2025-06-02 20:19:50.979051 | orchestrator | grafana : Find custom grafana dashboards -------------------------------- 0.75s
2025-06-02 20:19:50.979058 | orchestrator | grafana : Check if extra configuration file exists ---------------------- 0.73s
2025-06-02 20:19:50.979064 | orchestrator | grafana : Ensuring config directories exist ----------------------------- 0.71s
2025-06-02 20:19:50.979070 | orchestrator | grafana : Find templated grafana dashboards ----------------------------- 0.69s
2025-06-02 20:19:50.979076 | orchestrator | service-cert-copy : grafana | Copying over backend internal TLS key ----- 0.65s
2025-06-02 20:19:50.979083 | orchestrator | grafana : include_tasks ------------------------------------------------- 0.61s
2025-06-02 20:19:50.979090 | orchestrator | 2025-06-02 20:19:50 | INFO  | Wait 1 second(s) until the next check
2025-06-02 20:19:54.022723 | orchestrator | 2025-06-02 20:19:54 | INFO  | Task ce577db7-04a8-45de-ba4c-ffd16e1d3bc7 is in state STARTED
2025-06-02 20:19:54.022821 | orchestrator | 2025-06-02 20:19:54 | INFO  | Wait 1 second(s) until the next check
2025-06-02 20:19:57.067495 | orchestrator | 2025-06-02 20:19:57 | INFO  | Task ce577db7-04a8-45de-ba4c-ffd16e1d3bc7 is in state STARTED
2025-06-02 20:19:57.067578 | orchestrator | 2025-06-02 20:19:57 | INFO  | Wait 1 second(s) until the next check
2025-06-02 20:20:00.110446 | orchestrator | 2025-06-02 20:20:00 | INFO  | Task
ce577db7-04a8-45de-ba4c-ffd16e1d3bc7 is in state STARTED
2025-06-02 20:20:00.110520 | orchestrator | 2025-06-02 20:20:00 | INFO  | Wait 1 second(s) until the next check
2025-06-02 20:20:03.155782 | orchestrator | 2025-06-02 20:20:03 | INFO  | Task ce577db7-04a8-45de-ba4c-ffd16e1d3bc7 is in state STARTED
2025-06-02 20:20:03.155894 | orchestrator | 2025-06-02 20:20:03 | INFO  | Wait 1 second(s) until the next check
2025-06-02 20:20:06.181565 | orchestrator | 2025-06-02 20:20:06 | INFO  | Task ce577db7-04a8-45de-ba4c-ffd16e1d3bc7 is in state STARTED
2025-06-02 20:20:06.181664 | orchestrator | 2025-06-02 20:20:06 | INFO  | Wait 1 second(s) until the next check
2025-06-02 20:20:09.206674 | orchestrator | 2025-06-02 20:20:09 | INFO  | Task ce577db7-04a8-45de-ba4c-ffd16e1d3bc7 is in state STARTED
2025-06-02 20:20:09.206781 | orchestrator | 2025-06-02 20:20:09 | INFO  | Wait 1 second(s) until the next check
2025-06-02 20:20:12.236662 | orchestrator | 2025-06-02 20:20:12 | INFO  | Task ce577db7-04a8-45de-ba4c-ffd16e1d3bc7 is in state STARTED
2025-06-02 20:20:12.236770 | orchestrator | 2025-06-02 20:20:12 | INFO  | Wait 1 second(s) until the next check
2025-06-02 20:20:15.274089 | orchestrator | 2025-06-02 20:20:15 | INFO  | Task ce577db7-04a8-45de-ba4c-ffd16e1d3bc7 is in state STARTED
2025-06-02 20:20:15.274183 | orchestrator | 2025-06-02 20:20:15 | INFO  | Wait 1 second(s) until the next check
2025-06-02 20:20:18.314476 | orchestrator | 2025-06-02 20:20:18 | INFO  | Task ce577db7-04a8-45de-ba4c-ffd16e1d3bc7 is in state STARTED
2025-06-02 20:20:18.314569 | orchestrator | 2025-06-02 20:20:18 | INFO  | Wait 1 second(s) until the next check
2025-06-02 20:20:21.356799 | orchestrator | 2025-06-02 20:20:21 | INFO  | Task ce577db7-04a8-45de-ba4c-ffd16e1d3bc7 is in state STARTED
2025-06-02 20:20:21.356887 | orchestrator | 2025-06-02 20:20:21 | INFO  | Wait 1 second(s) until the next check
2025-06-02 20:20:24.399812 | orchestrator | 2025-06-02 20:20:24 | INFO  | Task ce577db7-04a8-45de-ba4c-ffd16e1d3bc7 is in state STARTED
2025-06-02 20:20:24.399940 | orchestrator | 2025-06-02 20:20:24 | INFO  | Wait 1 second(s) until the next check
2025-06-02 20:20:27.444614 | orchestrator | 2025-06-02 20:20:27 | INFO  | Task ce577db7-04a8-45de-ba4c-ffd16e1d3bc7 is in state STARTED
2025-06-02 20:20:27.444753 | orchestrator | 2025-06-02 20:20:27 | INFO  | Wait 1 second(s) until the next check
2025-06-02 20:20:30.484170 | orchestrator | 2025-06-02 20:20:30 | INFO  | Task ce577db7-04a8-45de-ba4c-ffd16e1d3bc7 is in state STARTED
2025-06-02 20:20:30.484362 | orchestrator | 2025-06-02 20:20:30 | INFO  | Wait 1 second(s) until the next check
2025-06-02 20:20:33.526930 | orchestrator | 2025-06-02 20:20:33 | INFO  | Task ce577db7-04a8-45de-ba4c-ffd16e1d3bc7 is in state STARTED
2025-06-02 20:20:33.526997 | orchestrator | 2025-06-02 20:20:33 | INFO  | Wait 1 second(s) until the next check
2025-06-02 20:20:36.575520 | orchestrator | 2025-06-02 20:20:36 | INFO  | Task ce577db7-04a8-45de-ba4c-ffd16e1d3bc7 is in state STARTED
2025-06-02 20:20:36.575593 | orchestrator | 2025-06-02 20:20:36 | INFO  | Wait 1 second(s) until the next check
2025-06-02 20:20:39.627134 | orchestrator | 2025-06-02 20:20:39 | INFO  | Task ce577db7-04a8-45de-ba4c-ffd16e1d3bc7 is in state STARTED
2025-06-02 20:20:39.627318 | orchestrator | 2025-06-02 20:20:39 | INFO  | Wait 1 second(s) until the next check
2025-06-02 20:20:42.670682 | orchestrator | 2025-06-02 20:20:42 | INFO  | Task ce577db7-04a8-45de-ba4c-ffd16e1d3bc7 is in state STARTED
2025-06-02 20:20:42.670784 | orchestrator | 2025-06-02 20:20:42 | INFO  | Wait 1 second(s) until the next check
2025-06-02 20:20:45.717059 | orchestrator | 2025-06-02 20:20:45 | INFO  | Task ce577db7-04a8-45de-ba4c-ffd16e1d3bc7 is in state STARTED
2025-06-02 20:20:45.717163 | orchestrator | 2025-06-02 20:20:45 | INFO  | Wait 1 second(s) until the next check
2025-06-02 20:20:48.771825 | orchestrator | 2025-06-02 20:20:48 | INFO  | Task ce577db7-04a8-45de-ba4c-ffd16e1d3bc7 is in state STARTED
2025-06-02 20:20:48.771914 | orchestrator | 2025-06-02 20:20:48 | INFO  | Wait 1 second(s) until the next check
2025-06-02 20:20:51.831104 | orchestrator | 2025-06-02 20:20:51 | INFO  | Task ce577db7-04a8-45de-ba4c-ffd16e1d3bc7 is in state STARTED
2025-06-02 20:20:51.831266 | orchestrator | 2025-06-02 20:20:51 | INFO  | Wait 1 second(s) until the next check
2025-06-02 20:20:54.878760 | orchestrator | 2025-06-02 20:20:54 | INFO  | Task ce577db7-04a8-45de-ba4c-ffd16e1d3bc7 is in state STARTED
2025-06-02 20:20:54.878858 | orchestrator | 2025-06-02 20:20:54 | INFO  | Wait 1 second(s) until the next check
2025-06-02 20:20:57.920988 | orchestrator | 2025-06-02 20:20:57 | INFO  | Task ce577db7-04a8-45de-ba4c-ffd16e1d3bc7 is in state STARTED
2025-06-02 20:20:57.921145 | orchestrator | 2025-06-02 20:20:57 | INFO  | Wait 1 second(s) until the next check
2025-06-02 20:21:00.962889 | orchestrator | 2025-06-02 20:21:00 | INFO  | Task ce577db7-04a8-45de-ba4c-ffd16e1d3bc7 is in state STARTED
2025-06-02 20:21:00.963039 | orchestrator | 2025-06-02 20:21:00 | INFO  | Wait 1 second(s) until the next check
2025-06-02 20:21:04.007588 | orchestrator | 2025-06-02 20:21:04 | INFO  | Task ce577db7-04a8-45de-ba4c-ffd16e1d3bc7 is in state STARTED
2025-06-02 20:21:04.007716 | orchestrator | 2025-06-02 20:21:04 | INFO  | Wait 1 second(s) until the next check
2025-06-02 20:21:07.060357 | orchestrator | 2025-06-02 20:21:07 | INFO  | Task ce577db7-04a8-45de-ba4c-ffd16e1d3bc7 is in state STARTED
2025-06-02 20:21:07.060468 | orchestrator | 2025-06-02 20:21:07 | INFO  | Wait 1 second(s) until the next check
2025-06-02 20:21:10.112058 | orchestrator | 2025-06-02 20:21:10 | INFO  | Task ce577db7-04a8-45de-ba4c-ffd16e1d3bc7 is in state STARTED
2025-06-02 20:21:10.112142 | orchestrator | 2025-06-02 20:21:10 | INFO  | Wait 1 second(s) until the next check
2025-06-02 20:21:13.150554 | orchestrator | 2025-06-02 20:21:13 | INFO  | Task ce577db7-04a8-45de-ba4c-ffd16e1d3bc7 is in state STARTED
2025-06-02 20:21:13.150642 | orchestrator | 2025-06-02 20:21:13 | INFO  | Wait 1 second(s) until the next check
2025-06-02 20:21:16.189075 | orchestrator | 2025-06-02 20:21:16 | INFO  | Task ce577db7-04a8-45de-ba4c-ffd16e1d3bc7 is in state STARTED
2025-06-02 20:21:16.189147 | orchestrator | 2025-06-02 20:21:16 | INFO  | Wait 1 second(s) until the next check
2025-06-02 20:21:19.228678 | orchestrator | 2025-06-02 20:21:19 | INFO  | Task ce577db7-04a8-45de-ba4c-ffd16e1d3bc7 is in state STARTED
2025-06-02 20:21:19.228760 | orchestrator | 2025-06-02 20:21:19 | INFO  | Wait 1 second(s) until the next check
2025-06-02 20:21:22.272195 | orchestrator | 2025-06-02 20:21:22 | INFO  | Task ce577db7-04a8-45de-ba4c-ffd16e1d3bc7 is in state STARTED
2025-06-02 20:21:22.272286 | orchestrator | 2025-06-02 20:21:22 | INFO  | Wait 1 second(s) until the next check
2025-06-02 20:21:25.312706 | orchestrator | 2025-06-02 20:21:25 | INFO  | Task ce577db7-04a8-45de-ba4c-ffd16e1d3bc7 is in state STARTED
2025-06-02 20:21:25.312787 | orchestrator | 2025-06-02 20:21:25 | INFO  | Wait 1 second(s) until the next check
2025-06-02 20:21:28.372874 | orchestrator | 2025-06-02 20:21:28 | INFO  | Task ce577db7-04a8-45de-ba4c-ffd16e1d3bc7 is in state STARTED
2025-06-02 20:21:28.372945 | orchestrator | 2025-06-02 20:21:28 | INFO  | Wait 1 second(s) until the next check
2025-06-02 20:21:31.423799 | orchestrator | 2025-06-02 20:21:31 | INFO  | Task ce577db7-04a8-45de-ba4c-ffd16e1d3bc7 is in state STARTED
2025-06-02 20:21:31.423892 | orchestrator | 2025-06-02 20:21:31 | INFO  | Wait 1 second(s) until the next check
2025-06-02 20:21:34.463778 | orchestrator | 2025-06-02 20:21:34 | INFO  | Task ce577db7-04a8-45de-ba4c-ffd16e1d3bc7 is in state STARTED
2025-06-02 20:21:34.463862 | orchestrator | 2025-06-02 20:21:34 | INFO  | Wait 1 second(s) until the next check
2025-06-02 20:21:37.497723 | orchestrator | 2025-06-02 20:21:37 | INFO  | Task ce577db7-04a8-45de-ba4c-ffd16e1d3bc7 is in state STARTED
2025-06-02 20:21:37.497806 | orchestrator | 2025-06-02 20:21:37 | INFO  | Wait 1 second(s) until the next check
2025-06-02 20:21:40.537684 | orchestrator | 2025-06-02 20:21:40 | INFO  | Task ce577db7-04a8-45de-ba4c-ffd16e1d3bc7 is in state STARTED
2025-06-02 20:21:40.537762 | orchestrator | 2025-06-02 20:21:40 | INFO  | Wait 1 second(s) until the next check
2025-06-02 20:21:43.577228 | orchestrator | 2025-06-02 20:21:43 | INFO  | Task ce577db7-04a8-45de-ba4c-ffd16e1d3bc7 is in state STARTED
2025-06-02 20:21:43.577323 | orchestrator | 2025-06-02 20:21:43 | INFO  | Wait 1 second(s) until the next check
2025-06-02 20:21:46.618331 | orchestrator | 2025-06-02 20:21:46 | INFO  | Task ce577db7-04a8-45de-ba4c-ffd16e1d3bc7 is in state STARTED
2025-06-02 20:21:46.618428 | orchestrator | 2025-06-02 20:21:46 | INFO  | Wait 1 second(s) until the next check
2025-06-02 20:21:49.667490 | orchestrator | 2025-06-02 20:21:49 | INFO  | Task ce577db7-04a8-45de-ba4c-ffd16e1d3bc7 is in state STARTED
2025-06-02 20:21:49.667576 | orchestrator | 2025-06-02 20:21:49 | INFO  | Wait 1 second(s) until the next check
2025-06-02 20:21:52.712021 | orchestrator | 2025-06-02 20:21:52 | INFO  | Task ce577db7-04a8-45de-ba4c-ffd16e1d3bc7 is in state STARTED
2025-06-02 20:21:52.712099 | orchestrator | 2025-06-02 20:21:52 | INFO  | Wait 1 second(s) until the next check
2025-06-02 20:21:55.757276 | orchestrator | 2025-06-02 20:21:55 | INFO  | Task ce577db7-04a8-45de-ba4c-ffd16e1d3bc7 is in state STARTED
2025-06-02 20:21:55.757405 | orchestrator | 2025-06-02 20:21:55 | INFO  | Wait 1 second(s) until the next check
2025-06-02 20:21:58.798074 | orchestrator | 2025-06-02 20:21:58 | INFO  | Task ce577db7-04a8-45de-ba4c-ffd16e1d3bc7 is in state STARTED
2025-06-02 20:21:58.798186 | orchestrator | 2025-06-02 20:21:58 | INFO  | Wait 1 second(s) until the next check
2025-06-02 20:22:01.840280 | orchestrator | 2025-06-02 20:22:01 | INFO  | Task ce577db7-04a8-45de-ba4c-ffd16e1d3bc7 is in state STARTED
2025-06-02 20:22:01.840353 | orchestrator | 2025-06-02 20:22:01 | INFO  | Wait 1 second(s) until the next check
2025-06-02 20:22:04.882634 | orchestrator | 2025-06-02 20:22:04 | INFO  | Task ce577db7-04a8-45de-ba4c-ffd16e1d3bc7 is in state STARTED
2025-06-02 20:22:04.882734 | orchestrator | 2025-06-02 20:22:04 | INFO  | Wait 1 second(s) until the next check
2025-06-02 20:22:07.934565 | orchestrator | 2025-06-02 20:22:07 | INFO  | Task ce577db7-04a8-45de-ba4c-ffd16e1d3bc7 is in state STARTED
2025-06-02 20:22:07.934687 | orchestrator | 2025-06-02 20:22:07 | INFO  | Wait 1 second(s) until the next check
2025-06-02 20:22:10.990898 | orchestrator | 2025-06-02 20:22:10 | INFO  | Task ce577db7-04a8-45de-ba4c-ffd16e1d3bc7 is in state STARTED
2025-06-02 20:22:10.991009 | orchestrator | 2025-06-02 20:22:10 | INFO  | Wait 1 second(s) until the next check
2025-06-02 20:22:14.033769 | orchestrator | 2025-06-02 20:22:14 | INFO  | Task ce577db7-04a8-45de-ba4c-ffd16e1d3bc7 is in state STARTED
2025-06-02 20:22:14.033858 | orchestrator | 2025-06-02 20:22:14 | INFO  | Wait 1 second(s) until the next check
2025-06-02 20:22:17.074978 | orchestrator | 2025-06-02 20:22:17 | INFO  | Task ce577db7-04a8-45de-ba4c-ffd16e1d3bc7 is in state STARTED
2025-06-02 20:22:17.075051 | orchestrator | 2025-06-02 20:22:17 | INFO  | Wait 1 second(s) until the next check
2025-06-02 20:22:20.114437 | orchestrator | 2025-06-02 20:22:20 | INFO  | Task ce577db7-04a8-45de-ba4c-ffd16e1d3bc7 is in state STARTED
2025-06-02 20:22:20.114544 | orchestrator | 2025-06-02 20:22:20 | INFO  | Wait 1 second(s) until the next check
2025-06-02 20:22:23.158086 | orchestrator | 2025-06-02 20:22:23 | INFO  | Task ce577db7-04a8-45de-ba4c-ffd16e1d3bc7 is in state STARTED
2025-06-02 20:22:23.158174 | orchestrator | 2025-06-02 20:22:23 | INFO  | Wait 1 second(s) until the next check
2025-06-02 20:22:26.204767 | orchestrator | 2025-06-02 20:22:26 | INFO  | Task ce577db7-04a8-45de-ba4c-ffd16e1d3bc7 is in state STARTED
2025-06-02 20:22:26.204850 | orchestrator | 2025-06-02 20:22:26 | INFO  | Wait 1 second(s) until the next check
2025-06-02 20:22:29.251534 | orchestrator | 2025-06-02 20:22:29 | INFO  | Task ce577db7-04a8-45de-ba4c-ffd16e1d3bc7 is in state STARTED
2025-06-02 20:22:29.251658 | orchestrator | 2025-06-02 20:22:29 | INFO  | Wait 1 second(s) until the next check
2025-06-02 20:22:32.285849 | orchestrator | 2025-06-02 20:22:32 | INFO  | Task ce577db7-04a8-45de-ba4c-ffd16e1d3bc7 is in state STARTED
2025-06-02 20:22:32.285982 | orchestrator | 2025-06-02 20:22:32 | INFO  | Wait 1 second(s) until the next check
2025-06-02 20:22:35.318465 | orchestrator | 2025-06-02 20:22:35 | INFO  | Task ce577db7-04a8-45de-ba4c-ffd16e1d3bc7 is in state STARTED
2025-06-02 20:22:35.318554 | orchestrator | 2025-06-02 20:22:35 | INFO  | Wait 1 second(s) until the next check
2025-06-02 20:22:38.357802 | orchestrator | 2025-06-02 20:22:38 | INFO  | Task ce577db7-04a8-45de-ba4c-ffd16e1d3bc7 is in state STARTED
2025-06-02 20:22:38.357893 | orchestrator | 2025-06-02 20:22:38 | INFO  | Wait 1 second(s) until the next check
2025-06-02 20:22:41.409596 | orchestrator | 2025-06-02 20:22:41 | INFO  | Task ce577db7-04a8-45de-ba4c-ffd16e1d3bc7 is in state STARTED
2025-06-02 20:22:41.409704 | orchestrator | 2025-06-02 20:22:41 | INFO  | Wait 1 second(s) until the next check
2025-06-02 20:22:44.456650 | orchestrator | 2025-06-02 20:22:44 | INFO  | Task ce577db7-04a8-45de-ba4c-ffd16e1d3bc7 is in state STARTED
2025-06-02 20:22:44.456742 | orchestrator | 2025-06-02 20:22:44 | INFO  | Wait 1 second(s) until the next check
2025-06-02 20:22:47.500067 | orchestrator | 2025-06-02 20:22:47 | INFO  | Task ce577db7-04a8-45de-ba4c-ffd16e1d3bc7 is in state STARTED
2025-06-02 20:22:47.500165 | orchestrator | 2025-06-02 20:22:47 | INFO  | Wait 1 second(s) until the next check
2025-06-02 20:22:50.548340 | orchestrator | 2025-06-02 20:22:50 | INFO  | Task
ce577db7-04a8-45de-ba4c-ffd16e1d3bc7 is in state STARTED
2025-06-02 20:22:50.548409 | orchestrator | 2025-06-02 20:22:50 | INFO  | Wait 1 second(s) until the next check
2025-06-02 20:22:53.599809 | orchestrator | 2025-06-02 20:22:53 | INFO  | Task ce577db7-04a8-45de-ba4c-ffd16e1d3bc7 is in state STARTED
2025-06-02 20:22:53.600851 | orchestrator | 2025-06-02 20:22:53 | INFO  | Task 456be27b-b241-4334-8e1a-aac7d9b39f5e is in state STARTED
2025-06-02 20:22:53.600913 | orchestrator | 2025-06-02 20:22:53 | INFO  | Wait 1 second(s) until the next check
2025-06-02 20:22:56.655448 | orchestrator | 2025-06-02 20:22:56 | INFO  | Task ce577db7-04a8-45de-ba4c-ffd16e1d3bc7 is in state STARTED
2025-06-02 20:22:56.658211 | orchestrator | 2025-06-02 20:22:56 | INFO  | Task 456be27b-b241-4334-8e1a-aac7d9b39f5e is in state STARTED
2025-06-02 20:22:56.658268 | orchestrator | 2025-06-02 20:22:56 | INFO  | Wait 1 second(s) until the next check
2025-06-02 20:22:59.698909 | orchestrator | 2025-06-02 20:22:59 | INFO  | Task ce577db7-04a8-45de-ba4c-ffd16e1d3bc7 is in state STARTED
2025-06-02 20:22:59.699432 | orchestrator | 2025-06-02 20:22:59 | INFO  | Task 456be27b-b241-4334-8e1a-aac7d9b39f5e is in state STARTED
2025-06-02 20:22:59.699457 | orchestrator | 2025-06-02 20:22:59 | INFO  | Wait 1 second(s) until the next check
2025-06-02 20:23:02.740612 | orchestrator | 2025-06-02 20:23:02 | INFO  | Task ce577db7-04a8-45de-ba4c-ffd16e1d3bc7 is in state STARTED
2025-06-02 20:23:02.740712 | orchestrator | 2025-06-02 20:23:02 | INFO  | Task 456be27b-b241-4334-8e1a-aac7d9b39f5e is in state STARTED
2025-06-02 20:23:02.740727 | orchestrator | 2025-06-02 20:23:02 | INFO  | Wait 1 second(s) until the next check
2025-06-02 20:23:05.768202 | orchestrator | 2025-06-02 20:23:05 | INFO  | Task ce577db7-04a8-45de-ba4c-ffd16e1d3bc7 is in state STARTED
2025-06-02 20:23:05.768274 | orchestrator | 2025-06-02 20:23:05 | INFO  | Task 456be27b-b241-4334-8e1a-aac7d9b39f5e is in state STARTED
2025-06-02 20:23:05.769924 | orchestrator | 2025-06-02 20:23:05 | INFO  | Wait 1 second(s) until the next check
2025-06-02 20:23:08.805560 | orchestrator | 2025-06-02 20:23:08 | INFO  | Task ce577db7-04a8-45de-ba4c-ffd16e1d3bc7 is in state STARTED
2025-06-02 20:23:08.805648 | orchestrator | 2025-06-02 20:23:08 | INFO  | Task 456be27b-b241-4334-8e1a-aac7d9b39f5e is in state STARTED
2025-06-02 20:23:08.805659 | orchestrator | 2025-06-02 20:23:08 | INFO  | Wait 1 second(s) until the next check
2025-06-02 20:23:11.856730 | orchestrator | 2025-06-02 20:23:11 | INFO  | Task ce577db7-04a8-45de-ba4c-ffd16e1d3bc7 is in state STARTED
2025-06-02 20:23:11.857787 | orchestrator | 2025-06-02 20:23:11 | INFO  | Task 456be27b-b241-4334-8e1a-aac7d9b39f5e is in state SUCCESS
2025-06-02 20:23:11.857836 | orchestrator | 2025-06-02 20:23:11 | INFO  | Wait 1 second(s) until the next check
2025-06-02 20:23:14.897163 | orchestrator | 2025-06-02 20:23:14 | INFO  | Task ce577db7-04a8-45de-ba4c-ffd16e1d3bc7 is in state STARTED
2025-06-02 20:23:14.897263 | orchestrator | 2025-06-02 20:23:14 | INFO  | Wait 1 second(s) until the next check
2025-06-02 20:23:17.939704 | orchestrator | 2025-06-02 20:23:17 | INFO  | Task ce577db7-04a8-45de-ba4c-ffd16e1d3bc7 is in state STARTED
2025-06-02 20:23:17.939809 | orchestrator | 2025-06-02 20:23:17 | INFO  | Wait 1 second(s) until the next check
2025-06-02 20:23:20.982626 | orchestrator | 2025-06-02 20:23:20 | INFO  | Task ce577db7-04a8-45de-ba4c-ffd16e1d3bc7 is in state STARTED
2025-06-02 20:23:20.982746 | orchestrator | 2025-06-02 20:23:20 | INFO  | Wait 1 second(s) until the next check
2025-06-02 20:23:24.048577 | orchestrator | 2025-06-02 20:23:24 | INFO  | Task ce577db7-04a8-45de-ba4c-ffd16e1d3bc7 is in state STARTED
2025-06-02 20:23:24.048687 | orchestrator | 2025-06-02 20:23:24 | INFO  | Wait 1 second(s) until the next check
2025-06-02 20:23:27.092895 | orchestrator | 2025-06-02 20:23:27 | INFO  | Task ce577db7-04a8-45de-ba4c-ffd16e1d3bc7 is in state STARTED
2025-06-02 20:23:27.093042 | orchestrator | 2025-06-02 20:23:27 | INFO  | Wait 1 second(s) until the next check
2025-06-02 20:23:30.130276 | orchestrator | 2025-06-02 20:23:30 | INFO  | Task ce577db7-04a8-45de-ba4c-ffd16e1d3bc7 is in state STARTED
2025-06-02 20:23:30.130380 | orchestrator | 2025-06-02 20:23:30 | INFO  | Wait 1 second(s) until the next check
2025-06-02 20:23:33.165818 | orchestrator | 2025-06-02 20:23:33 | INFO  | Task ce577db7-04a8-45de-ba4c-ffd16e1d3bc7 is in state STARTED
2025-06-02 20:23:33.165889 | orchestrator | 2025-06-02 20:23:33 | INFO  | Wait 1 second(s) until the next check
2025-06-02 20:23:36.206475 | orchestrator | 2025-06-02 20:23:36 | INFO  | Task ce577db7-04a8-45de-ba4c-ffd16e1d3bc7 is in state STARTED
2025-06-02 20:23:36.206576 | orchestrator | 2025-06-02 20:23:36 | INFO  | Wait 1 second(s) until the next check
2025-06-02 20:23:39.254606 | orchestrator | 2025-06-02 20:23:39 | INFO  | Task ce577db7-04a8-45de-ba4c-ffd16e1d3bc7 is in state STARTED
2025-06-02 20:23:39.254694 | orchestrator | 2025-06-02 20:23:39 | INFO  | Wait 1 second(s) until the next check
2025-06-02 20:23:42.309453 | orchestrator |
2025-06-02 20:23:42.309619 | orchestrator | None
2025-06-02 20:23:42.309637 | orchestrator |
2025-06-02 20:23:42.309677 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-06-02 20:23:42.309691 | orchestrator |
2025-06-02 20:23:42.309703 | orchestrator | TASK [Group hosts based on OpenStack release] **********************************
2025-06-02 20:23:42.309714 | orchestrator | Monday 02 June 2025 20:15:14 +0000 (0:00:00.645) 0:00:00.645 ***********
2025-06-02 20:23:42.309725 | orchestrator | changed: [testbed-manager]
2025-06-02 20:23:42.309737 | orchestrator | changed: [testbed-node-0]
2025-06-02 20:23:42.309748 | orchestrator | changed: [testbed-node-1]
2025-06-02 20:23:42.309759 | orchestrator | changed: [testbed-node-2]
2025-06-02 20:23:42.309770 | orchestrator | changed: [testbed-node-3]
2025-06-02 20:23:42.309781 | orchestrator | changed: [testbed-node-4]
2025-06-02 20:23:42.309791 | orchestrator | changed: [testbed-node-5]
2025-06-02 20:23:42.309802 | orchestrator |
2025-06-02 20:23:42.309835 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-06-02 20:23:42.309847 | orchestrator | Monday 02 June 2025 20:15:15 +0000 (0:00:01.313) 0:00:01.959 ***********
2025-06-02 20:23:42.309857 | orchestrator | changed: [testbed-manager]
2025-06-02 20:23:42.309869 | orchestrator | changed: [testbed-node-0]
2025-06-02 20:23:42.309880 | orchestrator | changed: [testbed-node-1]
2025-06-02 20:23:42.310119 | orchestrator | changed: [testbed-node-2]
2025-06-02 20:23:42.310174 | orchestrator | changed: [testbed-node-3]
2025-06-02 20:23:42.310188 | orchestrator | changed: [testbed-node-4]
2025-06-02 20:23:42.310200 | orchestrator | changed: [testbed-node-5]
2025-06-02 20:23:42.310297 | orchestrator |
2025-06-02 20:23:42.310313 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-06-02 20:23:42.310326 | orchestrator | Monday 02 June 2025 20:15:16 +0000 (0:00:00.754) 0:00:02.713 ***********
2025-06-02 20:23:42.310338 | orchestrator | changed: [testbed-manager] => (item=enable_nova_True)
2025-06-02 20:23:42.310349 | orchestrator | changed: [testbed-node-0] => (item=enable_nova_True)
2025-06-02 20:23:42.310360 | orchestrator | changed: [testbed-node-1] => (item=enable_nova_True)
2025-06-02 20:23:42.310370 | orchestrator | changed: [testbed-node-2] => (item=enable_nova_True)
2025-06-02 20:23:42.310381 | orchestrator | changed: [testbed-node-3] => (item=enable_nova_True)
2025-06-02 20:23:42.310419 | orchestrator | changed: [testbed-node-4] => (item=enable_nova_True)
2025-06-02 20:23:42.310430 | orchestrator | changed: [testbed-node-5] => (item=enable_nova_True)
2025-06-02 20:23:42.310441 | orchestrator |
2025-06-02 20:23:42.310451 | orchestrator | PLAY [Bootstrap nova API databases] ********************************************
2025-06-02 20:23:42.310462 | orchestrator |
2025-06-02 20:23:42.310473 | orchestrator | TASK [Bootstrap deploy] ********************************************************
2025-06-02 20:23:42.310484 | orchestrator | Monday 02 June 2025 20:15:17 +0000 (0:00:01.171) 0:00:03.884 ***********
2025-06-02 20:23:42.310494 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-02 20:23:42.310505 | orchestrator |
2025-06-02 20:23:42.310516 | orchestrator | TASK [nova : Creating Nova databases] ******************************************
2025-06-02 20:23:42.310541 | orchestrator | Monday 02 June 2025 20:15:18 +0000 (0:00:00.751) 0:00:04.636 ***********
2025-06-02 20:23:42.310553 | orchestrator | changed: [testbed-node-0] => (item=nova_cell0)
2025-06-02 20:23:42.310564 | orchestrator | changed: [testbed-node-0] => (item=nova_api)
2025-06-02 20:23:42.310575 | orchestrator |
2025-06-02 20:23:42.310585 | orchestrator | TASK [nova : Creating Nova databases user and setting permissions] *************
2025-06-02 20:23:42.310596 | orchestrator | Monday 02 June 2025 20:15:22 +0000 (0:00:04.016) 0:00:08.653 ***********
2025-06-02 20:23:42.310607 | orchestrator | changed: [testbed-node-0] => (item=None)
2025-06-02 20:23:42.310618 | orchestrator | changed: [testbed-node-0] => (item=None)
2025-06-02 20:23:42.310628 | orchestrator | changed: [testbed-node-0]
2025-06-02 20:23:42.310639 | orchestrator |
2025-06-02 20:23:42.310700 | orchestrator | TASK [nova : Ensuring config directories exist] ********************************
2025-06-02 20:23:42.310712 | orchestrator | Monday 02 June 2025 20:15:26 +0000 (0:00:04.237) 0:00:12.891 ***********
2025-06-02 20:23:42.310723 | orchestrator | changed: [testbed-node-0]
2025-06-02 20:23:42.310733 | orchestrator |
2025-06-02 20:23:42.310744 | orchestrator | TASK [nova : Copying over config.json files for nova-api-bootstrap] ************
2025-06-02 20:23:42.310755 | orchestrator | Monday 02 June 2025 20:15:27 +0000 (0:00:01.128) 0:00:14.020 ***********
2025-06-02 20:23:42.310766 | orchestrator | changed: [testbed-node-0]
2025-06-02 20:23:42.310797 | orchestrator |
2025-06-02 20:23:42.310808 | orchestrator | TASK [nova : Copying over nova.conf for nova-api-bootstrap] ********************
2025-06-02 20:23:42.310819 | orchestrator | Monday 02 June 2025 20:15:29 +0000 (0:00:01.980) 0:00:16.000 ***********
2025-06-02 20:23:42.310830 | orchestrator | changed: [testbed-node-0]
2025-06-02 20:23:42.310840 | orchestrator |
2025-06-02 20:23:42.310851 | orchestrator | TASK [nova : include_tasks] ****************************************************
2025-06-02 20:23:42.310862 | orchestrator | Monday 02 June 2025 20:15:33 +0000 (0:00:03.681) 0:00:19.681 ***********
2025-06-02 20:23:42.310873 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:23:42.310884 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:23:42.310894 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:23:42.310936 | orchestrator |
2025-06-02 20:23:42.310948 | orchestrator | TASK [nova : Running Nova API bootstrap container] *****************************
2025-06-02 20:23:42.310959 | orchestrator | Monday 02 June 2025 20:15:33 +0000 (0:00:00.359) 0:00:20.041 ***********
2025-06-02 20:23:42.310970 | orchestrator | ok: [testbed-node-0]
2025-06-02 20:23:42.310981 | orchestrator |
2025-06-02 20:23:42.310992 | orchestrator | TASK [nova : Create cell0 mappings] ********************************************
2025-06-02 20:23:42.311013 | orchestrator | Monday 02 June 2025 20:16:02 +0000 (0:00:29.279) 0:00:49.321 ***********
2025-06-02 20:23:42.311023 | orchestrator | changed: [testbed-node-0]
2025-06-02 20:23:42.311034 | orchestrator |
2025-06-02 20:23:42.311045 | orchestrator | TASK [nova-cell : Get a list of existing cells] ********************************
2025-06-02 20:23:42.311056 | orchestrator | Monday 02 June 2025 20:16:18 +0000 (0:00:15.585) 0:01:04.906 ***********
2025-06-02 20:23:42.311066 | orchestrator | ok: [testbed-node-0]
2025-06-02 20:23:42.311156 | orchestrator |
2025-06-02 20:23:42.311168 | orchestrator | TASK [nova-cell : Extract current cell settings from list] *********************
2025-06-02 20:23:42.311179 | orchestrator | Monday 02 June 2025 20:16:30 +0000 (0:00:11.957) 0:01:16.863 ***********
2025-06-02 20:23:42.311208 | orchestrator | ok: [testbed-node-0]
2025-06-02 20:23:42.311219 | orchestrator |
2025-06-02 20:23:42.311231 | orchestrator | TASK [nova : Update cell0 mappings] ********************************************
2025-06-02 20:23:42.311241 | orchestrator | Monday 02 June 2025 20:16:31 +0000 (0:00:00.894) 0:01:17.757 ***********
2025-06-02 20:23:42.311252 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:23:42.311263 | orchestrator |
2025-06-02 20:23:42.311274 | orchestrator | TASK [nova : include_tasks] ****************************************************
2025-06-02 20:23:42.311284 | orchestrator | Monday 02 June 2025 20:16:31 +0000 (0:00:00.405) 0:01:18.163 ***********
2025-06-02 20:23:42.311296 | orchestrator | included: /ansible/roles/nova/tasks/bootstrap_service.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-02 20:23:42.311307 | orchestrator |
2025-06-02 20:23:42.311318 | orchestrator | TASK [nova : Running Nova API bootstrap container] *****************************
2025-06-02 20:23:42.311329 | orchestrator | Monday 02 June 2025 20:16:32 +0000 (0:00:00.427) 0:01:18.591 ***********
2025-06-02 20:23:42.311339 | orchestrator | ok: [testbed-node-0]
2025-06-02 20:23:42.311350 | orchestrator |
2025-06-02 20:23:42.311361 | orchestrator | TASK [Bootstrap upgrade] *******************************************************
2025-06-02 20:23:42.311372 | orchestrator | Monday 02 June 2025 20:16:49 +0000 (0:00:17.482) 0:01:36.073 ***********
2025-06-02 20:23:42.311382 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:23:42.311393 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:23:42.311404 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:23:42.311415 | orchestrator |
2025-06-02 20:23:42.311425 | orchestrator | PLAY [Bootstrap nova cell databases] *******************************************
2025-06-02 20:23:42.311436 | orchestrator |
2025-06-02 20:23:42.311447 | orchestrator | TASK [Bootstrap deploy] ********************************************************
2025-06-02 20:23:42.311457 | orchestrator | Monday 02 June 2025 20:16:49 +0000 (0:00:00.271) 0:01:36.345 ***********
2025-06-02 20:23:42.311468 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-02 20:23:42.311479 | orchestrator |
2025-06-02 20:23:42.311490 | orchestrator | TASK [nova-cell : Creating Nova cell database] *********************************
2025-06-02 20:23:42.311500 | orchestrator | Monday 02 June 2025 20:16:50 +0000 (0:00:00.499) 0:01:36.845 ***********
2025-06-02 20:23:42.311511 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:23:42.311522 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:23:42.311532 | orchestrator | changed: [testbed-node-0]
2025-06-02 20:23:42.311543 | orchestrator |
2025-06-02 20:23:42.311553 | orchestrator | TASK [nova-cell : Creating Nova cell database user and setting permissions] ****
2025-06-02 20:23:42.311564 | orchestrator | Monday 02 June 2025 20:16:52 +0000 (0:00:02.191) 0:01:39.037 ***********
2025-06-02 20:23:42.311575 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:23:42.311586 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:23:42.311596 | orchestrator | changed: [testbed-node-0]
2025-06-02 20:23:42.311607 | orchestrator |
2025-06-02 20:23:42.311617 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ******************
2025-06-02 20:23:42.311634 | orchestrator | Monday 02 June 2025 20:16:54 +0000 (0:00:02.290) 0:01:41.327 ***********
2025-06-02 20:23:42.311645 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:23:42.311663 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:23:42.311674 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:23:42.311701 | orchestrator |
2025-06-02 20:23:42.311723 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] *******************
2025-06-02 20:23:42.311734 | orchestrator | Monday 02 June 2025 20:16:55 +0000 (0:00:00.301) 0:01:41.629 ***********
2025-06-02 20:23:42.311745 | orchestrator | skipping: [testbed-node-1] => (item=None)
2025-06-02 20:23:42.311755 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:23:42.311766 | orchestrator | skipping: [testbed-node-2] => (item=None)
2025-06-02 20:23:42.311777 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:23:42.311788 | orchestrator | ok: [testbed-node-0] => (item=None)
2025-06-02 20:23:42.311807 | orchestrator | ok: [testbed-node-0 -> {{ service_rabbitmq_delegate_host }}]
2025-06-02 20:23:42.311826 | orchestrator |
2025-06-02 20:23:42.311843 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ******************
2025-06-02 20:23:42.311861 | orchestrator | Monday 02 June 2025 20:17:03 +0000 (0:00:08.204) 0:01:49.834 ***********
2025-06-02 20:23:42.311879 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:23:42.311900 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:23:42.311948 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:23:42.311961 | orchestrator |
2025-06-02 20:23:42.311972 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] *******************
2025-06-02 20:23:42.311983 | orchestrator | Monday 02 June 2025 20:17:03 +0000 (0:00:00.302) 0:01:50.136 ***********
2025-06-02 20:23:42.311994 | orchestrator | skipping: [testbed-node-0] => (item=None)
2025-06-02 20:23:42.312004 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:23:42.312015 | orchestrator | skipping: [testbed-node-1] => (item=None)
2025-06-02 20:23:42.312025 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:23:42.312036 | orchestrator | skipping: [testbed-node-2] => (item=None)
2025-06-02 20:23:42.312049 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:23:42.312068 | orchestrator |
2025-06-02 20:23:42.312085 | orchestrator | TASK [nova-cell : Ensuring config directories exist] ***************************
2025-06-02 20:23:42.312103 | orchestrator | Monday 02 June 2025 20:17:04 +0000 (0:00:00.568) 0:01:50.704 ***********
2025-06-02 20:23:42.312121 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:23:42.312138 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:23:42.312157 | orchestrator | changed: [testbed-node-0]
2025-06-02 20:23:42.312177 | orchestrator |
2025-06-02 20:23:42.312194 | orchestrator | TASK [nova-cell : Copying over config.json files for nova-cell-bootstrap] ******
2025-06-02 20:23:42.312211 | orchestrator | Monday 02 June 2025 20:17:04 +0000 (0:00:00.452) 0:01:51.156 ***********
2025-06-02 20:23:42.312222 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:23:42.312233 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:23:42.312244 | orchestrator | changed: [testbed-node-0]
2025-06-02 20:23:42.312255 | orchestrator |
2025-06-02 20:23:42.312265 | orchestrator | TASK [nova-cell : Copying over nova.conf for nova-cell-bootstrap] **************
2025-06-02 20:23:42.312276 | orchestrator | Monday 02 June 2025 20:17:05 +0000 (0:00:00.904) 0:01:52.061 ***********
2025-06-02 20:23:42.312287 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:23:42.312315 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:23:42.312335 | orchestrator | changed: [testbed-node-0]
2025-06-02 20:23:42.312349 | orchestrator |
2025-06-02 20:23:42.312360 | orchestrator | TASK [nova-cell : Running Nova cell bootstrap container] ***********************
2025-06-02 20:23:42.312371 | orchestrator | Monday 02 June 2025 20:17:07 +0000 (0:00:01.940) 0:01:54.002 ***********
2025-06-02 20:23:42.312382 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:23:42.312398 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:23:42.312416 | orchestrator | ok: [testbed-node-0] 2025-06-02 20:23:42.312434 | orchestrator | 2025-06-02 20:23:42.312454 | orchestrator | TASK [nova-cell : Get a list of existing cells] ******************************** 2025-06-02 20:23:42.312473 | orchestrator | Monday 02 June 2025 20:17:27 +0000 (0:00:20.177) 0:02:14.180 *********** 2025-06-02 20:23:42.312492 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:23:42.312523 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:23:42.312542 | orchestrator | ok: [testbed-node-0] 2025-06-02 20:23:42.312561 | orchestrator | 2025-06-02 20:23:42.312580 | orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2025-06-02 20:23:42.312597 | orchestrator | Monday 02 June 2025 20:17:39 +0000 (0:00:12.065) 0:02:26.245 *********** 2025-06-02 20:23:42.312608 | orchestrator | ok: [testbed-node-0] 2025-06-02 20:23:42.312619 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:23:42.312630 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:23:42.312640 | orchestrator | 2025-06-02 20:23:42.312651 | orchestrator | TASK [nova-cell : Create cell] ************************************************* 2025-06-02 20:23:42.312662 | orchestrator | Monday 02 June 2025 20:17:40 +0000 (0:00:00.781) 0:02:27.027 *********** 2025-06-02 20:23:42.312673 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:23:42.312683 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:23:42.312694 | orchestrator | changed: [testbed-node-0] 2025-06-02 20:23:42.312704 | orchestrator | 2025-06-02 20:23:42.312715 | orchestrator | TASK [nova-cell : Update cell] ************************************************* 2025-06-02 20:23:42.312729 | orchestrator | Monday 02 June 2025 20:17:53 +0000 (0:00:12.354) 0:02:39.382 *********** 2025-06-02 
20:23:42.312747 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:23:42.312776 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:23:42.312796 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:23:42.312813 | orchestrator | 2025-06-02 20:23:42.312829 | orchestrator | TASK [Bootstrap upgrade] ******************************************************* 2025-06-02 20:23:42.312846 | orchestrator | Monday 02 June 2025 20:17:54 +0000 (0:00:01.481) 0:02:40.864 *********** 2025-06-02 20:23:42.312862 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:23:42.312879 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:23:42.312896 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:23:42.312942 | orchestrator | 2025-06-02 20:23:42.312959 | orchestrator | PLAY [Apply role nova] ********************************************************* 2025-06-02 20:23:42.312976 | orchestrator | 2025-06-02 20:23:42.312995 | orchestrator | TASK [nova : include_tasks] **************************************************** 2025-06-02 20:23:42.313010 | orchestrator | Monday 02 June 2025 20:17:54 +0000 (0:00:00.329) 0:02:41.193 *********** 2025-06-02 20:23:42.313030 | orchestrator | included: /ansible/roles/nova/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 20:23:42.313043 | orchestrator | 2025-06-02 20:23:42.313053 | orchestrator | TASK [service-ks-register : nova | Creating services] ************************** 2025-06-02 20:23:42.313064 | orchestrator | Monday 02 June 2025 20:17:55 +0000 (0:00:00.572) 0:02:41.766 *********** 2025-06-02 20:23:42.313075 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy (compute_legacy))  2025-06-02 20:23:42.313086 | orchestrator | changed: [testbed-node-0] => (item=nova (compute)) 2025-06-02 20:23:42.313096 | orchestrator | 2025-06-02 20:23:42.313107 | orchestrator | TASK [service-ks-register : nova | Creating endpoints] ************************* 2025-06-02 20:23:42.313117 | 
orchestrator | Monday 02 June 2025 20:17:58 +0000 (0:00:03.214) 0:02:44.981 *********** 2025-06-02 20:23:42.313128 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api-int.testbed.osism.xyz:8774/v2/%(tenant_id)s -> internal)  2025-06-02 20:23:42.313141 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api.testbed.osism.xyz:8774/v2/%(tenant_id)s -> public)  2025-06-02 20:23:42.313151 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api-int.testbed.osism.xyz:8774/v2.1 -> internal) 2025-06-02 20:23:42.313162 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api.testbed.osism.xyz:8774/v2.1 -> public) 2025-06-02 20:23:42.313173 | orchestrator | 2025-06-02 20:23:42.313184 | orchestrator | TASK [service-ks-register : nova | Creating projects] ************************** 2025-06-02 20:23:42.313194 | orchestrator | Monday 02 June 2025 20:18:05 +0000 (0:00:07.052) 0:02:52.033 *********** 2025-06-02 20:23:42.313205 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-06-02 20:23:42.313227 | orchestrator | 2025-06-02 20:23:42.313238 | orchestrator | TASK [service-ks-register : nova | Creating users] ***************************** 2025-06-02 20:23:42.313249 | orchestrator | Monday 02 June 2025 20:18:08 +0000 (0:00:03.254) 0:02:55.288 *********** 2025-06-02 20:23:42.313259 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-06-02 20:23:42.313270 | orchestrator | changed: [testbed-node-0] => (item=nova -> service) 2025-06-02 20:23:42.313281 | orchestrator | 2025-06-02 20:23:42.313291 | orchestrator | TASK [service-ks-register : nova | Creating roles] ***************************** 2025-06-02 20:23:42.313302 | orchestrator | Monday 02 June 2025 20:18:12 +0000 (0:00:04.057) 0:02:59.346 *********** 2025-06-02 20:23:42.313313 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-06-02 20:23:42.313323 | orchestrator | 2025-06-02 20:23:42.313334 | orchestrator 
| TASK [service-ks-register : nova | Granting user roles] ************************ 2025-06-02 20:23:42.313345 | orchestrator | Monday 02 June 2025 20:18:16 +0000 (0:00:03.558) 0:03:02.904 *********** 2025-06-02 20:23:42.313355 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> admin) 2025-06-02 20:23:42.313366 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> service) 2025-06-02 20:23:42.313376 | orchestrator | 2025-06-02 20:23:42.313387 | orchestrator | TASK [nova : Ensuring config directories exist] ******************************** 2025-06-02 20:23:42.313409 | orchestrator | Monday 02 June 2025 20:18:24 +0000 (0:00:07.954) 0:03:10.858 *********** 2025-06-02 20:23:42.313428 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-06-02 
20:23:42.313452 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-06-02 20:23:42.313467 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': 
False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-06-02 20:23:42.313496 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-06-02 20:23:42.313511 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-06-02 20:23:42.313522 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 
'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-06-02 20:23:42.313534 | orchestrator | 2025-06-02 20:23:42.313545 | orchestrator | TASK [nova : Check if policies shall be overwritten] *************************** 2025-06-02 20:23:42.313556 | orchestrator | Monday 02 June 2025 20:18:25 +0000 (0:00:01.359) 0:03:12.217 *********** 2025-06-02 20:23:42.313567 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:23:42.313578 | orchestrator | 2025-06-02 20:23:42.313589 | orchestrator | TASK [nova : Set nova policy file] ********************************************* 2025-06-02 20:23:42.313599 | orchestrator | Monday 02 June 2025 20:18:25 +0000 (0:00:00.119) 0:03:12.337 *********** 2025-06-02 20:23:42.313610 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:23:42.313621 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:23:42.313631 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:23:42.313642 | orchestrator | 2025-06-02 20:23:42.313658 | orchestrator | TASK [nova : Check for vendordata file] **************************************** 2025-06-02 20:23:42.313669 | orchestrator | Monday 02 June 2025 20:18:26 +0000 (0:00:00.522) 0:03:12.860 *********** 2025-06-02 20:23:42.313687 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-06-02 20:23:42.313697 | orchestrator | 2025-06-02 20:23:42.313708 | orchestrator | TASK [nova : Set vendordata file path] ***************************************** 2025-06-02 20:23:42.313719 | orchestrator | Monday 02 June 2025 20:18:27 +0000 (0:00:00.670) 0:03:13.530 *********** 2025-06-02 20:23:42.313729 | orchestrator | skipping: 
[testbed-node-0] 2025-06-02 20:23:42.313740 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:23:42.313750 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:23:42.313761 | orchestrator | 2025-06-02 20:23:42.313772 | orchestrator | TASK [nova : include_tasks] **************************************************** 2025-06-02 20:23:42.313782 | orchestrator | Monday 02 June 2025 20:18:27 +0000 (0:00:00.299) 0:03:13.830 *********** 2025-06-02 20:23:42.313793 | orchestrator | included: /ansible/roles/nova/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 20:23:42.313804 | orchestrator | 2025-06-02 20:23:42.313814 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] *********** 2025-06-02 20:23:42.313825 | orchestrator | Monday 02 June 2025 20:18:28 +0000 (0:00:00.806) 0:03:14.636 *********** 2025-06-02 20:23:42.313843 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 
'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-06-02 20:23:42.313857 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-06-02 20:23:42.313875 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-06-02 20:23:42.313895 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-06-02 20:23:42.313936 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': 
'30'}}}) 2025-06-02 20:23:42.313966 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-06-02 20:23:42.313984 | orchestrator | 2025-06-02 20:23:42.314003 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2025-06-02 20:23:42.314077 | orchestrator | Monday 02 June 2025 20:18:30 +0000 (0:00:02.251) 0:03:16.887 *********** 2025-06-02 20:23:42.314094 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': 
'8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-06-02 20:23:42.314129 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-06-02 20:23:42.314141 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:23:42.314153 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 
'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-06-02 20:23:42.314173 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-06-02 20:23:42.314185 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:23:42.314197 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 
'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-06-02 20:23:42.314216 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-06-02 20:23:42.314228 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:23:42.314239 | orchestrator | 2025-06-02 20:23:42.314254 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2025-06-02 20:23:42.314265 | orchestrator | Monday 02 June 2025 20:18:31 +0000 (0:00:00.574) 0:03:17.462 *********** 2025-06-02 20:23:42.314277 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': 
{'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-06-02 20:23:42.314289 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-06-02 20:23:42.314300 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:23:42 | INFO  | Task ce577db7-04a8-45de-ba4c-ffd16e1d3bc7 is in state SUCCESS 2025-06-02 20:23:42.314320 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled':
True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-06-02 20:23:42.314352 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-06-02 20:23:42.314363 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:23:42.314380 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 
'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-06-02 20:23:42.314392 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-06-02 20:23:42.314403 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:23:42.314414 | orchestrator | 2025-06-02 20:23:42.314424 | orchestrator | TASK [nova : Copying over config.json files for services] ********************** 2025-06-02 20:23:42.314435 | orchestrator | Monday 02 June 2025 20:18:32 +0000 (0:00:01.041) 0:03:18.503 *********** 2025-06-02 20:23:42.314455 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-06-02 20:23:42.314480 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 
'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-06-02 20:23:42.314493 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-06-02 20:23:42.314513 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-06-02 20:23:42.314526 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-06-02 20:23:42.314545 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-06-02 20:23:42.314556 | orchestrator | 2025-06-02 20:23:42.314567 | orchestrator | TASK [nova : Copying over nova.conf] ******************************************* 2025-06-02 20:23:42.314578 | orchestrator | Monday 02 June 2025 20:18:34 +0000 (0:00:02.344) 0:03:20.848 *********** 2025-06-02 20:23:42.314594 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-06-02 20:23:42.314607 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': 
'8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-06-02 20:23:42.314638 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-06-02 20:23:42.314657 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-06-02 20:23:42.314674 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-06-02 20:23:42.314686 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-06-02 20:23:42.314697 | orchestrator | 2025-06-02 20:23:42.314708 | orchestrator | TASK [nova : Copying over existing policy file] ******************************** 2025-06-02 20:23:42.314719 | orchestrator | Monday 02 June 2025 20:18:40 +0000 (0:00:05.599) 0:03:26.447 *********** 2025-06-02 20:23:42.314737 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': 
['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-06-02 20:23:42.314757 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-06-02 20:23:42.314768 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:23:42.314780 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': 
['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-06-02 20:23:42.314792 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-06-02 20:23:42.314803 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:23:42.314815 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': 
['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-06-02 20:23:42.314871 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-06-02 20:23:42.314891 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:23:42.314935 | orchestrator | 2025-06-02 20:23:42.314949 | orchestrator | TASK [nova : Copying over nova-api-wsgi.conf] ********************************** 2025-06-02 20:23:42.314960 | orchestrator | Monday 02 June 2025 20:18:40 +0000 (0:00:00.559) 0:03:27.007 *********** 
2025-06-02 20:23:42.314970 | orchestrator | changed: [testbed-node-0] 2025-06-02 20:23:42.314981 | orchestrator | changed: [testbed-node-1] 2025-06-02 20:23:42.314991 | orchestrator | changed: [testbed-node-2] 2025-06-02 20:23:42.315002 | orchestrator | 2025-06-02 20:23:42.315013 | orchestrator | TASK [nova : Copying over vendordata file] ************************************* 2025-06-02 20:23:42.315023 | orchestrator | Monday 02 June 2025 20:18:42 +0000 (0:00:01.917) 0:03:28.924 *********** 2025-06-02 20:23:42.315034 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:23:42.315045 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:23:42.315055 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:23:42.315066 | orchestrator | 2025-06-02 20:23:42.315076 | orchestrator | TASK [nova : Check nova containers] ******************************************** 2025-06-02 20:23:42.315087 | orchestrator | Monday 02 June 2025 20:18:42 +0000 (0:00:00.281) 0:03:29.205 *********** 2025-06-02 20:23:42.315108 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 
'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-06-02 20:23:42.315121 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-06-02 20:23:42.315150 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-06-02 20:23:42.315163 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-06-02 20:23:42.315180 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-06-02 20:23:42.315192 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-06-02 20:23:42.315203 | orchestrator | 2025-06-02 20:23:42.315214 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2025-06-02 20:23:42.315225 | orchestrator | Monday 02 June 2025 20:18:44 +0000 (0:00:01.716) 0:03:30.922 *********** 2025-06-02 20:23:42.315236 | orchestrator | 2025-06-02 20:23:42.315246 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2025-06-02 20:23:42.315257 | orchestrator | Monday 02 June 2025 20:18:44 +0000 (0:00:00.118) 0:03:31.041 *********** 2025-06-02 20:23:42.315267 | orchestrator | 2025-06-02 20:23:42.315278 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2025-06-02 20:23:42.315289 | orchestrator | Monday 02 June 2025 20:18:44 +0000 (0:00:00.115) 0:03:31.156 *********** 2025-06-02 20:23:42.315306 | orchestrator | 2025-06-02 20:23:42.315316 | orchestrator | RUNNING HANDLER [nova : Restart nova-scheduler container] ********************** 2025-06-02 20:23:42.315327 | orchestrator | Monday 02 June 2025 20:18:45 +0000 (0:00:00.229) 0:03:31.386 *********** 2025-06-02 20:23:42.315337 | orchestrator | changed: [testbed-node-0] 2025-06-02 
20:23:42.315348 | orchestrator | changed: [testbed-node-2] 2025-06-02 20:23:42.315359 | orchestrator | changed: [testbed-node-1] 2025-06-02 20:23:42.315369 | orchestrator | 2025-06-02 20:23:42.315380 | orchestrator | RUNNING HANDLER [nova : Restart nova-api container] **************************** 2025-06-02 20:23:42.315390 | orchestrator | Monday 02 June 2025 20:19:10 +0000 (0:00:25.418) 0:03:56.805 *********** 2025-06-02 20:23:42.315401 | orchestrator | changed: [testbed-node-1] 2025-06-02 20:23:42.315412 | orchestrator | changed: [testbed-node-2] 2025-06-02 20:23:42.315422 | orchestrator | changed: [testbed-node-0] 2025-06-02 20:23:42.315433 | orchestrator | 2025-06-02 20:23:42.315443 | orchestrator | PLAY [Apply role nova-cell] **************************************************** 2025-06-02 20:23:42.315454 | orchestrator | 2025-06-02 20:23:42.315465 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-06-02 20:23:42.315475 | orchestrator | Monday 02 June 2025 20:19:18 +0000 (0:00:08.343) 0:04:05.149 *********** 2025-06-02 20:23:42.315492 | orchestrator | included: /ansible/roles/nova-cell/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 20:23:42.315504 | orchestrator | 2025-06-02 20:23:42.315515 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-06-02 20:23:42.315525 | orchestrator | Monday 02 June 2025 20:19:19 +0000 (0:00:00.993) 0:04:06.142 *********** 2025-06-02 20:23:42.315536 | orchestrator | skipping: [testbed-node-3] 2025-06-02 20:23:42.315546 | orchestrator | skipping: [testbed-node-4] 2025-06-02 20:23:42.315557 | orchestrator | skipping: [testbed-node-5] 2025-06-02 20:23:42.315568 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:23:42.315578 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:23:42.315589 | orchestrator | skipping: [testbed-node-2] 2025-06-02 
20:23:42.315600 | orchestrator | 2025-06-02 20:23:42.315610 | orchestrator | TASK [Load and persist br_netfilter module] ************************************ 2025-06-02 20:23:42.315621 | orchestrator | Monday 02 June 2025 20:19:20 +0000 (0:00:00.728) 0:04:06.871 *********** 2025-06-02 20:23:42.315632 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:23:42.315642 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:23:42.315653 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:23:42.315664 | orchestrator | included: module-load for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-02 20:23:42.315675 | orchestrator | 2025-06-02 20:23:42.315685 | orchestrator | TASK [module-load : Load modules] ********************************************** 2025-06-02 20:23:42.315696 | orchestrator | Monday 02 June 2025 20:19:21 +0000 (0:00:00.868) 0:04:07.740 *********** 2025-06-02 20:23:42.315707 | orchestrator | ok: [testbed-node-4] => (item=br_netfilter) 2025-06-02 20:23:42.315718 | orchestrator | ok: [testbed-node-3] => (item=br_netfilter) 2025-06-02 20:23:42.315729 | orchestrator | ok: [testbed-node-5] => (item=br_netfilter) 2025-06-02 20:23:42.315739 | orchestrator | 2025-06-02 20:23:42.315750 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2025-06-02 20:23:42.315761 | orchestrator | Monday 02 June 2025 20:19:22 +0000 (0:00:00.677) 0:04:08.418 *********** 2025-06-02 20:23:42.315771 | orchestrator | changed: [testbed-node-3] => (item=br_netfilter) 2025-06-02 20:23:42.315782 | orchestrator | changed: [testbed-node-4] => (item=br_netfilter) 2025-06-02 20:23:42.315792 | orchestrator | changed: [testbed-node-5] => (item=br_netfilter) 2025-06-02 20:23:42.315803 | orchestrator | 2025-06-02 20:23:42.315814 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2025-06-02 20:23:42.315825 | orchestrator | Monday 02 June 2025 20:19:23 +0000 (0:00:01.179) 0:04:09.597 
*********** 2025-06-02 20:23:42.315835 | orchestrator | skipping: [testbed-node-3] => (item=br_netfilter)  2025-06-02 20:23:42.315846 | orchestrator | skipping: [testbed-node-3] 2025-06-02 20:23:42.315863 | orchestrator | skipping: [testbed-node-4] => (item=br_netfilter)  2025-06-02 20:23:42.315874 | orchestrator | skipping: [testbed-node-4] 2025-06-02 20:23:42.315884 | orchestrator | skipping: [testbed-node-5] => (item=br_netfilter)  2025-06-02 20:23:42.315895 | orchestrator | skipping: [testbed-node-5] 2025-06-02 20:23:42.315933 | orchestrator | 2025-06-02 20:23:42.315956 | orchestrator | TASK [nova-cell : Enable bridge-nf-call sysctl variables] ********************** 2025-06-02 20:23:42.315968 | orchestrator | Monday 02 June 2025 20:19:23 +0000 (0:00:00.739) 0:04:10.337 *********** 2025-06-02 20:23:42.315979 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)  2025-06-02 20:23:42.315990 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-06-02 20:23:42.316000 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:23:42.316011 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)  2025-06-02 20:23:42.316021 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-06-02 20:23:42.316032 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:23:42.316043 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables) 2025-06-02 20:23:42.316053 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)  2025-06-02 20:23:42.316064 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables) 2025-06-02 20:23:42.316075 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-06-02 20:23:42.316085 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:23:42.316096 | orchestrator | changed: 
[testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables) 2025-06-02 20:23:42.316107 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables) 2025-06-02 20:23:42.316118 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables) 2025-06-02 20:23:42.316128 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables) 2025-06-02 20:23:42.316139 | orchestrator | 2025-06-02 20:23:42.316150 | orchestrator | TASK [nova-cell : Install udev kolla kvm rules] ******************************** 2025-06-02 20:23:42.316160 | orchestrator | Monday 02 June 2025 20:19:26 +0000 (0:00:02.114) 0:04:12.451 *********** 2025-06-02 20:23:42.316171 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:23:42.316181 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:23:42.316192 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:23:42.316203 | orchestrator | changed: [testbed-node-3] 2025-06-02 20:23:42.316213 | orchestrator | changed: [testbed-node-4] 2025-06-02 20:23:42.316223 | orchestrator | changed: [testbed-node-5] 2025-06-02 20:23:42.316234 | orchestrator | 2025-06-02 20:23:42.316245 | orchestrator | TASK [nova-cell : Mask qemu-kvm service] *************************************** 2025-06-02 20:23:42.316255 | orchestrator | Monday 02 June 2025 20:19:27 +0000 (0:00:01.319) 0:04:13.770 *********** 2025-06-02 20:23:42.316266 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:23:42.316277 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:23:42.316287 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:23:42.316298 | orchestrator | changed: [testbed-node-5] 2025-06-02 20:23:42.316308 | orchestrator | changed: [testbed-node-3] 2025-06-02 20:23:42.316319 | orchestrator | changed: [testbed-node-4] 2025-06-02 20:23:42.316329 | orchestrator | 2025-06-02 20:23:42.316341 | orchestrator | TASK [nova-cell : Ensuring config directories exist] *************************** 
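The module-load and bridge-nf-call tasks recorded above amount to loading `br_netfilter`, persisting it under `/etc/modules-load.d/`, and enabling the two `bridge-nf-call` sysctls so iptables can filter bridged traffic. A minimal sketch of that effect (the helper name and `conf_dir` parameter are illustrative assumptions; the real work is done by Ansible modules, not this code):

```python
# Sketch of what the module-load / nova-cell tasks above do on each compute
# host. Paths mirror the task names in the log; the helper itself is
# illustrative, not taken from the roles.
import pathlib

# Sysctls enabled by "Enable bridge-nf-call sysctl variables" so that
# iptables (e.g. Neutron security groups) sees bridged traffic.
BRIDGE_SYSCTLS = {
    "net.bridge.bridge-nf-call-iptables": 1,
    "net.bridge.bridge-nf-call-ip6tables": 1,
}


def persist_module(name: str, conf_dir: str = "/etc/modules-load.d") -> pathlib.Path:
    """Persist a kernel module so systemd loads it on every boot."""
    path = pathlib.Path(conf_dir) / f"{name}.conf"
    path.write_text(name + "\n")
    return path
```

On a real host this pairs with `modprobe br_netfilter` and `sysctl -w` for the keys above; the log shows only the compute nodes (testbed-node-3 to 5) applying these changes while the controller nodes skip them.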
2025-06-02 20:23:42.316358 | orchestrator | Monday 02 June 2025 20:19:28 +0000 (0:00:01.494) 0:04:15.265 *********** 2025-06-02 20:23:42.316372 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250530', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-06-02 20:23:42.316405 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250530', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-06-02 20:23:42.316439 | 
orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250530', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-06-02 20:23:42.316465 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-06-02 20:23:42.316486 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-06-02 20:23:42.316520 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-06-02 20:23:42.316554 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-06-02 20:23:42.316567 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-06-02 20:23:42.316584 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-06-02 20:23:42.316596 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250530', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-06-02 20:23:42.316608 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-06-02 20:23:42.316629 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250530', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-06-02 20:23:42.316652 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250530', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-06-02 20:23:42.316664 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': 
{'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-06-02 20:23:42.316680 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-06-02 20:23:42.316691 | orchestrator | 2025-06-02 20:23:42.316702 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-06-02 20:23:42.316713 | orchestrator | Monday 02 June 2025 20:19:31 +0000 (0:00:02.139) 0:04:17.404 *********** 2025-06-02 20:23:42.316724 | orchestrator | included: /ansible/roles/nova-cell/tasks/copy-certs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 20:23:42.316736 | orchestrator | 2025-06-02 20:23:42.316747 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] *********** 2025-06-02 20:23:42.316758 | orchestrator | Monday 02 June 2025 20:19:32 +0000 (0:00:01.049) 0:04:18.454 *********** 2025-06-02 20:23:42.316769 | orchestrator | changed: [testbed-node-3] => (item={'key': 
'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250530', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-06-02 20:23:42.316788 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250530', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-06-02 20:23:42.316811 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250530', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-06-02 20:23:42.316828 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-06-02 20:23:42.316840 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-06-02 20:23:42.316852 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-06-02 20:23:42.316863 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-06-02 20:23:42.316889 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-06-02 20:23:42.316927 | orchestrator | 
changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-06-02 20:23:42.316941 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-06-02 20:23:42.316961 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-06-02 20:23:42.316973 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 
'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250530', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-06-02 20:23:42.316984 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250530', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-06-02 20:23:42.317011 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-06-02 20:23:42.317023 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250530', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-06-02 20:23:42.317034 | orchestrator | 2025-06-02 20:23:42.317045 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2025-06-02 20:23:42.317056 | orchestrator | Monday 02 June 2025 20:19:35 +0000 (0:00:03.124) 0:04:21.579 *********** 2025-06-02 20:23:42.317072 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250530', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', 
''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-06-02 20:23:42.317084 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-06-02 20:23:42.317095 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250530', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-06-02 20:23:42.317112 | orchestrator | skipping: [testbed-node-3] 2025-06-02 20:23:42.317131 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250530', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-06-02 20:23:42.317143 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-06-02 20:23:42.317154 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250530', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 
'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-06-02 20:23:42.317165 | orchestrator | skipping: [testbed-node-4] 2025-06-02 20:23:42.317181 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250530', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-06-02 20:23:42.317193 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-06-02 20:23:42.317218 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 
'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250530', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-06-02 20:23:42.317229 | orchestrator | skipping: [testbed-node-5] 2025-06-02 20:23:42.317241 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-06-02 20:23:42.317252 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
nova-conductor 5672'], 'timeout': '30'}}})  2025-06-02 20:23:42.317263 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:23:42.317274 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-06-02 20:23:42.317290 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-06-02 20:23:42.317302 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:23:42.317312 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-06-02 20:23:42.317330 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-06-02 20:23:42.317341 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:23:42.317352 | orchestrator | 2025-06-02 20:23:42.317363 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2025-06-02 20:23:42.317374 | orchestrator | Monday 02 June 2025 20:19:36 +0000 (0:00:01.731) 0:04:23.310 *********** 2025-06-02 20:23:42.317393 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250530', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh 
version --daemon'], 'timeout': '30'}}})  2025-06-02 20:23:42.317406 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-06-02 20:23:42.317422 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250530', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-06-02 20:23:42.317434 | orchestrator | skipping: [testbed-node-3] 2025-06-02 20:23:42.317445 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250530', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-06-02 20:23:42.317462 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-06-02 20:23:42.317481 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250530', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  
2025-06-02 20:23:42.317493 | orchestrator | skipping: [testbed-node-4] 2025-06-02 20:23:42.317504 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-06-02 20:23:42.317515 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-06-02 20:23:42.317526 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:23:42.317541 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250530', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 
'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-06-02 20:23:42.317559 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-06-02 20:23:42.317571 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250530', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-06-02 20:23:42.317582 | orchestrator | skipping: [testbed-node-5] 2025-06-02 20:23:42.317708 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-06-02 20:23:42.317723 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-06-02 20:23:42.317735 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:23:42.317746 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-06-02 20:23:42.317762 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-06-02 20:23:42.317782 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:23:42.317793 | orchestrator |
2025-06-02 20:23:42.317804 | orchestrator | TASK [nova-cell : include_tasks] ***********************************************
2025-06-02 20:23:42.317815 | orchestrator | Monday 02 June 2025 20:19:39 +0000 (0:00:02.174) 0:04:25.485 ***********
2025-06-02 20:23:42.317826 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:23:42.317837 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:23:42.317847 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:23:42.317858 | orchestrator | included: /ansible/roles/nova-cell/tasks/external_ceph.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-06-02 20:23:42.317869 | orchestrator |
2025-06-02 20:23:42.317880 | orchestrator | TASK [nova-cell : Check nova keyring file] *************************************
2025-06-02 20:23:42.317891 | orchestrator | Monday 02 June 2025 20:19:39 +0000 (0:00:00.823) 0:04:26.308 ***********
2025-06-02 20:23:42.317926 | orchestrator | ok: [testbed-node-3 -> localhost]
2025-06-02 20:23:42.317946 | orchestrator | ok: [testbed-node-4 -> localhost]
2025-06-02 20:23:42.317960 | orchestrator | ok: [testbed-node-5 -> localhost]
2025-06-02 20:23:42.317970 | orchestrator |
2025-06-02 20:23:42.317981 | orchestrator | TASK [nova-cell : Check cinder keyring file] ***********************************
2025-06-02 20:23:42.317992 | orchestrator | Monday 02 June 2025 20:19:40 +0000 (0:00:00.920) 0:04:27.229 ***********
2025-06-02 20:23:42.318002 | orchestrator | ok: [testbed-node-3 -> localhost]
2025-06-02 20:23:42.318013 | orchestrator | ok: [testbed-node-4 -> localhost]
2025-06-02 20:23:42.318083 | orchestrator | ok: [testbed-node-5 -> localhost]
2025-06-02 20:23:42.318095 | orchestrator |
2025-06-02 20:23:42.318106 | orchestrator | TASK [nova-cell : Extract nova key from file] **********************************
2025-06-02 20:23:42.318117 | orchestrator | Monday 02 June 2025 20:19:41 +0000 (0:00:00.826) 0:04:28.055 ***********
2025-06-02 20:23:42.318128 | orchestrator | ok: [testbed-node-3]
2025-06-02 20:23:42.318139 | orchestrator | ok: [testbed-node-4]
2025-06-02 20:23:42.318150 | orchestrator | ok: [testbed-node-5]
2025-06-02 20:23:42.318161 | orchestrator |
2025-06-02 20:23:42.318172 | orchestrator | TASK [nova-cell : Extract cinder key from file] ********************************
2025-06-02 20:23:42.318183 | orchestrator | Monday 02 June 2025 20:19:42 +0000 (0:00:00.437) 0:04:28.493 ***********
2025-06-02 20:23:42.318193 | orchestrator | ok: [testbed-node-3]
2025-06-02 20:23:42.318204 | orchestrator | ok: [testbed-node-4]
2025-06-02 20:23:42.318215 | orchestrator | ok: [testbed-node-5]
2025-06-02 20:23:42.318226 | orchestrator |
2025-06-02 20:23:42.318236 | orchestrator | TASK [nova-cell : Copy over ceph nova keyring file] ****************************
2025-06-02 20:23:42.318248 | orchestrator | Monday 02 June 2025 20:19:42 +0000 (0:00:00.452) 0:04:28.946 ***********
2025-06-02 20:23:42.318259 | orchestrator | changed: [testbed-node-4] => (item=nova-compute)
2025-06-02 20:23:42.318278 | orchestrator | changed: [testbed-node-3] => (item=nova-compute)
2025-06-02 20:23:42.318290 | orchestrator | changed: [testbed-node-5] => (item=nova-compute)
2025-06-02 20:23:42.318301 | orchestrator |
2025-06-02 20:23:42.318312 | orchestrator | TASK [nova-cell : Copy over ceph cinder keyring file] **************************
2025-06-02 20:23:42.318323 | orchestrator | Monday 02 June 2025 20:19:43 +0000 (0:00:01.345) 0:04:30.291 ***********
2025-06-02 20:23:42.318334 | orchestrator | changed: [testbed-node-4] => (item=nova-compute)
2025-06-02 20:23:42.318344 | orchestrator | changed: [testbed-node-3] => (item=nova-compute)
2025-06-02 20:23:42.318355 | orchestrator | changed: [testbed-node-5] => (item=nova-compute)
2025-06-02 20:23:42.318366 | orchestrator |
2025-06-02 20:23:42.318377 | orchestrator | TASK [nova-cell : Copy over ceph.conf] *****************************************
2025-06-02 20:23:42.318388 | orchestrator | Monday 02 June 2025 20:19:45 +0000 (0:00:01.239) 0:04:31.531 ***********
2025-06-02 20:23:42.318399 | orchestrator | changed: [testbed-node-3] => (item=nova-compute)
2025-06-02 20:23:42.318421 | orchestrator | changed: [testbed-node-4] => (item=nova-compute)
2025-06-02 20:23:42.318432 | orchestrator | changed: [testbed-node-5] => (item=nova-compute)
2025-06-02 20:23:42.318443 | orchestrator | changed: [testbed-node-3] => (item=nova-libvirt)
2025-06-02 20:23:42.318453 | orchestrator | changed: [testbed-node-4] => (item=nova-libvirt)
2025-06-02 20:23:42.318464 | orchestrator | changed: [testbed-node-5] => (item=nova-libvirt)
2025-06-02 20:23:42.318475 | orchestrator |
2025-06-02 20:23:42.318485 | orchestrator | TASK [nova-cell : Ensure /etc/ceph directory exists (host libvirt)] ************
2025-06-02 20:23:42.318496 | orchestrator | Monday 02 June 2025 20:19:48 +0000 (0:00:03.633) 0:04:35.165 ***********
2025-06-02 20:23:42.318507 | orchestrator | skipping: [testbed-node-3]
2025-06-02 20:23:42.318518 | orchestrator | skipping: [testbed-node-4]
2025-06-02 20:23:42.318528 | orchestrator | skipping: [testbed-node-5]
2025-06-02 20:23:42.318539 | orchestrator |
2025-06-02 20:23:42.318550 | orchestrator | TASK [nova-cell : Copy over ceph.conf (host libvirt)] **************************
2025-06-02 20:23:42.318561 | orchestrator | Monday 02 June 2025 20:19:49 +0000 (0:00:00.321) 0:04:35.486 ***********
2025-06-02 20:23:42.318572 | orchestrator | skipping: [testbed-node-3]
2025-06-02 20:23:42.318583 | orchestrator | skipping: [testbed-node-4]
2025-06-02 20:23:42.318593 | orchestrator | skipping: [testbed-node-5]
2025-06-02 20:23:42.318604 | orchestrator |
2025-06-02 20:23:42.318615 | orchestrator | TASK [nova-cell : Ensuring libvirt secrets directory exists] *******************
2025-06-02 20:23:42.318626 | orchestrator | Monday 02 June 2025 20:19:49 +0000 (0:00:00.318) 0:04:35.804 ***********
2025-06-02 20:23:42.318637 | orchestrator | changed: [testbed-node-3]
2025-06-02 20:23:42.318648 | orchestrator | changed: [testbed-node-4]
2025-06-02 20:23:42.318658 | orchestrator | changed: [testbed-node-5]
2025-06-02 20:23:42.318669 | orchestrator |
2025-06-02 20:23:42.318689 | orchestrator | TASK [nova-cell : Pushing nova secret xml for libvirt] *************************
2025-06-02 20:23:42.318700 | orchestrator | Monday 02 June 2025 20:19:50 +0000 (0:00:01.503) 0:04:37.308 ***********
2025-06-02 20:23:42.318711 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True})
2025-06-02 20:23:42.318723 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True})
2025-06-02 20:23:42.318734 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True})
2025-06-02 20:23:42.318745 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'})
2025-06-02 20:23:42.318756 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'})
2025-06-02 20:23:42.318767 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'})
2025-06-02 20:23:42.318777 | orchestrator |
2025-06-02 20:23:42.318788 | orchestrator | TASK [nova-cell : Pushing secrets key for libvirt] *****************************
2025-06-02 20:23:42.318799 | orchestrator | Monday 02 June 2025 20:19:54 +0000 (0:00:03.152) 0:04:40.461 ***********
2025-06-02 20:23:42.318810 | orchestrator | changed: [testbed-node-3] => (item=None)
2025-06-02 20:23:42.318821 | orchestrator | changed: [testbed-node-4] => (item=None)
2025-06-02 20:23:42.318832 | orchestrator | changed: [testbed-node-5] => (item=None)
2025-06-02 20:23:42.318842 | orchestrator | changed: [testbed-node-3] => (item=None)
2025-06-02 20:23:42.318853 | orchestrator | changed: [testbed-node-3]
2025-06-02 20:23:42.318864 | orchestrator | changed: [testbed-node-4] => (item=None)
2025-06-02 20:23:42.318874 | orchestrator | changed: [testbed-node-4]
2025-06-02 20:23:42.318885 | orchestrator | changed: [testbed-node-5] => (item=None)
2025-06-02 20:23:42.318899 | orchestrator | changed: [testbed-node-5]
2025-06-02 20:23:42.318992 | orchestrator |
2025-06-02 20:23:42.319012 | orchestrator | TASK [nova-cell : Check if policies shall be overwritten] **********************
2025-06-02 20:23:42.319023 | orchestrator | Monday 02 June 2025 20:19:57 +0000 (0:00:03.259) 0:04:43.721 ***********
2025-06-02 20:23:42.319034 | orchestrator | skipping: [testbed-node-3]
2025-06-02 20:23:42.319044 | orchestrator |
2025-06-02 20:23:42.319055 | orchestrator | TASK [nova-cell : Set nova policy file] ****************************************
2025-06-02 20:23:42.319066 | orchestrator | Monday 02 June 2025 20:19:57 +0000 (0:00:00.126) 0:04:43.848 ***********
2025-06-02 20:23:42.319077 | orchestrator | skipping: [testbed-node-3]
2025-06-02 20:23:42.319088 | orchestrator | skipping: [testbed-node-4]
2025-06-02 20:23:42.319098 | orchestrator | skipping: [testbed-node-5]
2025-06-02 20:23:42.319109 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:23:42.319120 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:23:42.319130 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:23:42.319141 | orchestrator |
2025-06-02 20:23:42.319152 | orchestrator | TASK [nova-cell : Check for vendordata file] ***********************************
2025-06-02 20:23:42.319169 | orchestrator | Monday 02 June 2025 20:19:58 +0000 (0:00:00.760) 0:04:44.608 ***********
2025-06-02 20:23:42.319180 | orchestrator | ok: [testbed-node-3 -> localhost]
2025-06-02 20:23:42.319191 | orchestrator |
2025-06-02 20:23:42.319201 | orchestrator | TASK [nova-cell : Set vendordata file path] ************************************
2025-06-02 20:23:42.319212 | orchestrator | Monday 02 June 2025 20:19:58 +0000 (0:00:00.683) 0:04:45.292 ***********
2025-06-02 20:23:42.319223 | orchestrator | skipping: [testbed-node-3]
2025-06-02 20:23:42.319233 | orchestrator | skipping: [testbed-node-4]
2025-06-02 20:23:42.319244 | orchestrator | skipping: [testbed-node-5]
2025-06-02 20:23:42.319255 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:23:42.319265 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:23:42.319276 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:23:42.319286 | orchestrator |
2025-06-02 20:23:42.319297 | orchestrator | TASK [nova-cell : Copying over config.json files for services] *****************
2025-06-02 20:23:42.319308 | orchestrator | Monday 02 June 2025 20:19:59 +0000 (0:00:00.594) 0:04:45.887 ***********
2025-06-02 20:23:42.319319 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250530', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro',
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-06-02 20:23:42.319335 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250530', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-06-02 20:23:42.319346 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250530', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 
'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-06-02 20:23:42.319362 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-06-02 20:23:42.319378 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-06-02 20:23:42.319389 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250530', 'enabled': True, 'volumes': 
['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-06-02 20:23:42.319398 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-06-02 20:23:42.319413 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-06-02 20:23:42.319423 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-06-02 20:23:42.319439 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-06-02 20:23:42.319449 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-06-02 20:23:42.319464 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-06-02 20:23:42.319475 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250530', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-06-02 20:23:42.319489 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250530', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-06-02 20:23:42.319500 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 
'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250530', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-06-02 20:23:42.319516 | orchestrator | 2025-06-02 20:23:42.319526 | orchestrator | TASK [nova-cell : Copying over nova.conf] ************************************** 2025-06-02 20:23:42.319535 | orchestrator | Monday 02 June 2025 20:20:03 +0000 (0:00:03.995) 0:04:49.883 *********** 2025-06-02 20:23:42.319546 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250530', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-06-02 20:23:42.319561 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': 
{'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-06-02 20:23:42.319571 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250530', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-06-02 20:23:42.319586 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-06-02 20:23:42.319602 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250530', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-06-02 20:23:42.319612 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-06-02 20:23:42.319627 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250530', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': 
['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-06-02 20:23:42.319638 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-06-02 20:23:42.319648 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250530', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 
2025-06-02 20:23:42.319663 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250530', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-06-02 20:23:42.319683 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-06-02 20:23:42.319694 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-06-02 20:23:42.319709 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-06-02 20:23:42.319719 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-06-02 20:23:42.319729 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
nova-conductor 5672'], 'timeout': '30'}}}) 2025-06-02 20:23:42.319739 | orchestrator | 2025-06-02 20:23:42.319748 | orchestrator | TASK [nova-cell : Copying over Nova compute provider config] ******************* 2025-06-02 20:23:42.319758 | orchestrator | Monday 02 June 2025 20:20:09 +0000 (0:00:05.583) 0:04:55.466 *********** 2025-06-02 20:23:42.319774 | orchestrator | skipping: [testbed-node-3] 2025-06-02 20:23:42.319783 | orchestrator | skipping: [testbed-node-5] 2025-06-02 20:23:42.319793 | orchestrator | skipping: [testbed-node-4] 2025-06-02 20:23:42.319805 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:23:42.319822 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:23:42.319832 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:23:42.319842 | orchestrator | 2025-06-02 20:23:42.319851 | orchestrator | TASK [nova-cell : Copying over libvirt configuration] ************************** 2025-06-02 20:23:42.319865 | orchestrator | Monday 02 June 2025 20:20:10 +0000 (0:00:01.193) 0:04:56.660 *********** 2025-06-02 20:23:42.319875 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2025-06-02 20:23:42.319885 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2025-06-02 20:23:42.319894 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2025-06-02 20:23:42.319922 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2025-06-02 20:23:42.319932 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:23:42.319941 | orchestrator | changed: [testbed-node-4] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2025-06-02 20:23:42.319951 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2025-06-02 20:23:42.319961 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:23:42.319970 | 
orchestrator | skipping: [testbed-node-0] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2025-06-02 20:23:42.319980 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:23:42.319989 | orchestrator | changed: [testbed-node-3] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2025-06-02 20:23:42.319998 | orchestrator | changed: [testbed-node-5] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2025-06-02 20:23:42.320008 | orchestrator | changed: [testbed-node-4] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2025-06-02 20:23:42.320024 | orchestrator | changed: [testbed-node-3] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2025-06-02 20:23:42.320042 | orchestrator | changed: [testbed-node-5] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2025-06-02 20:23:42.320060 | orchestrator | 2025-06-02 20:23:42.320076 | orchestrator | TASK [nova-cell : Copying over libvirt TLS keys] ******************************* 2025-06-02 20:23:42.320092 | orchestrator | Monday 02 June 2025 20:20:13 +0000 (0:00:03.280) 0:04:59.940 *********** 2025-06-02 20:23:42.320107 | orchestrator | skipping: [testbed-node-3] 2025-06-02 20:23:42.320122 | orchestrator | skipping: [testbed-node-4] 2025-06-02 20:23:42.320138 | orchestrator | skipping: [testbed-node-5] 2025-06-02 20:23:42.320155 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:23:42.320172 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:23:42.320189 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:23:42.320205 | orchestrator | 2025-06-02 20:23:42.320222 | orchestrator | TASK [nova-cell : Copying over libvirt SASL configuration] ********************* 2025-06-02 20:23:42.320239 | orchestrator | Monday 02 June 2025 20:20:14 +0000 (0:00:00.756) 0:05:00.697 *********** 2025-06-02 20:23:42.320255 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2025-06-02 
20:23:42.320272 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2025-06-02 20:23:42.320299 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2025-06-02 20:23:42.320318 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2025-06-02 20:23:42.320336 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2025-06-02 20:23:42.320355 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2025-06-02 20:23:42.320388 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2025-06-02 20:23:42.320406 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2025-06-02 20:23:42.320423 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2025-06-02 20:23:42.320438 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2025-06-02 20:23:42.320448 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:23:42.320458 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2025-06-02 20:23:42.320467 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:23:42.320477 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2025-06-02 20:23:42.320486 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:23:42.320495 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 
'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2025-06-02 20:23:42.320505 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2025-06-02 20:23:42.320514 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2025-06-02 20:23:42.320524 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2025-06-02 20:23:42.320533 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2025-06-02 20:23:42.320553 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2025-06-02 20:23:42.320563 | orchestrator | 2025-06-02 20:23:42.320572 | orchestrator | TASK [nova-cell : Copying files for nova-ssh] ********************************** 2025-06-02 20:23:42.320582 | orchestrator | Monday 02 June 2025 20:20:19 +0000 (0:00:04.874) 0:05:05.571 *********** 2025-06-02 20:23:42.320591 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2025-06-02 20:23:42.320601 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2025-06-02 20:23:42.320610 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2025-06-02 20:23:42.320619 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-06-02 20:23:42.320629 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-06-02 20:23:42.320638 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2025-06-02 20:23:42.320653 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2025-06-02 
20:23:42.320671 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2025-06-02 20:23:42.320694 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-06-02 20:23:42.320710 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-06-02 20:23:42.320725 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-06-02 20:23:42.320741 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-06-02 20:23:42.320757 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2025-06-02 20:23:42.320771 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:23:42.320786 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-06-02 20:23:42.320802 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2025-06-02 20:23:42.320830 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2025-06-02 20:23:42.320846 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-06-02 20:23:42.320862 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:23:42.320878 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:23:42.320894 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-06-02 20:23:42.320939 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-06-02 20:23:42.320949 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-06-02 20:23:42.320968 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-06-02 20:23:42.320978 | orchestrator | changed: 
[testbed-node-3] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-06-02 20:23:42.320987 | orchestrator | changed: [testbed-node-5] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-06-02 20:23:42.320996 | orchestrator | changed: [testbed-node-4] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-06-02 20:23:42.321006 | orchestrator | 2025-06-02 20:23:42.321015 | orchestrator | TASK [nova-cell : Copying VMware vCenter CA file] ****************************** 2025-06-02 20:23:42.321025 | orchestrator | Monday 02 June 2025 20:20:25 +0000 (0:00:06.689) 0:05:12.260 *********** 2025-06-02 20:23:42.321034 | orchestrator | skipping: [testbed-node-3] 2025-06-02 20:23:42.321043 | orchestrator | skipping: [testbed-node-4] 2025-06-02 20:23:42.321053 | orchestrator | skipping: [testbed-node-5] 2025-06-02 20:23:42.321062 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:23:42.321071 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:23:42.321080 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:23:42.321090 | orchestrator | 2025-06-02 20:23:42.321099 | orchestrator | TASK [nova-cell : Copying 'release' file for nova_compute] ********************* 2025-06-02 20:23:42.321109 | orchestrator | Monday 02 June 2025 20:20:26 +0000 (0:00:00.546) 0:05:12.807 *********** 2025-06-02 20:23:42.321118 | orchestrator | skipping: [testbed-node-3] 2025-06-02 20:23:42.321127 | orchestrator | skipping: [testbed-node-4] 2025-06-02 20:23:42.321137 | orchestrator | skipping: [testbed-node-5] 2025-06-02 20:23:42.321146 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:23:42.321155 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:23:42.321165 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:23:42.321174 | orchestrator | 2025-06-02 20:23:42.321183 | orchestrator | TASK [nova-cell : Generating 'hostnqn' file for nova_compute] ****************** 2025-06-02 20:23:42.321193 | orchestrator | Monday 02 June 2025 20:20:27 +0000 
(0:00:00.760) 0:05:13.567 *********** 2025-06-02 20:23:42.321202 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:23:42.321211 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:23:42.321221 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:23:42.321230 | orchestrator | changed: [testbed-node-3] 2025-06-02 20:23:42.321239 | orchestrator | changed: [testbed-node-5] 2025-06-02 20:23:42.321248 | orchestrator | changed: [testbed-node-4] 2025-06-02 20:23:42.321258 | orchestrator | 2025-06-02 20:23:42.321267 | orchestrator | TASK [nova-cell : Copying over existing policy file] *************************** 2025-06-02 20:23:42.321277 | orchestrator | Monday 02 June 2025 20:20:28 +0000 (0:00:01.715) 0:05:15.283 *********** 2025-06-02 20:23:42.321293 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250530', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-06-02 20:23:42.321312 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-06-02 20:23:42.321322 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250530', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-06-02 20:23:42.321333 | orchestrator | skipping: [testbed-node-3] 2025-06-02 20:23:42.321348 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250530', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 
67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-06-02 20:23:42.321359 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-06-02 20:23:42.321375 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250530', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-06-02 20:23:42.321392 | orchestrator | skipping: [testbed-node-4] 2025-06-02 20:23:42.321402 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250530', 'pid_mode': 'host', 'cgroupns_mode': 'host', 
'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-06-02 20:23:42.321412 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-06-02 20:23:42.321429 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250530', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-06-02 20:23:42.321440 | orchestrator | skipping: [testbed-node-5] 2025-06-02 20:23:42.321450 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-06-02 20:23:42.321460 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-06-02 20:23:42.321470 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:23:42.321485 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-06-02 20:23:42.321501 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-06-02 20:23:42.321511 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:23:42.321521 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-06-02 20:23:42.321544 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-06-02 20:23:42.321561 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:23:42.321577 | orchestrator | 2025-06-02 20:23:42.321593 | orchestrator | TASK [nova-cell : Copying over vendordata file to containers] ****************** 2025-06-02 20:23:42.321608 | orchestrator | Monday 02 June 2025 20:20:30 +0000 (0:00:01.582) 0:05:16.866 *********** 2025-06-02 20:23:42.321624 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)  2025-06-02 20:23:42.321641 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)  2025-06-02 20:23:42.321658 | orchestrator | skipping: [testbed-node-3] 2025-06-02 20:23:42.321674 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)  2025-06-02 20:23:42.321691 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)  2025-06-02 20:23:42.321701 | orchestrator | skipping: [testbed-node-4] 2025-06-02 20:23:42.321711 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)  2025-06-02 20:23:42.321720 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)  2025-06-02 20:23:42.321730 | orchestrator | skipping: [testbed-node-5] 2025-06-02 20:23:42.321739 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)  2025-06-02 20:23:42.321749 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)  2025-06-02 20:23:42.321758 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:23:42.321767 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)  2025-06-02 20:23:42.321777 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)  2025-06-02 20:23:42.321786 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:23:42.321804 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)  2025-06-02 20:23:42.321813 | 
orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)  2025-06-02 20:23:42.321823 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:23:42.321832 | orchestrator | 2025-06-02 20:23:42.321842 | orchestrator | TASK [nova-cell : Check nova-cell containers] ********************************** 2025-06-02 20:23:42.321851 | orchestrator | Monday 02 June 2025 20:20:31 +0000 (0:00:00.624) 0:05:17.491 *********** 2025-06-02 20:23:42.321866 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250530', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-06-02 20:23:42.321878 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250530', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 
'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-06-02 20:23:42.321895 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250530', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-06-02 20:23:42.321933 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-06-02 20:23:42.321951 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 
'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-06-02 20:23:42.321981 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-06-02 20:23:42.321995 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-06-02 20:23:42.322081 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250530', 'enabled': 
True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-06-02 20:23:42.322106 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-06-02 20:23:42.322135 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250530', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-06-02 20:23:42.322152 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': 
{'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-06-02 20:23:42.322182 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-06-02 20:23:42.322207 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250530', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-06-02 
20:23:42.322229 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250530', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-06-02 20:23:42.322248 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-06-02 20:23:42.322267 | orchestrator | 2025-06-02 20:23:42.322280 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-06-02 20:23:42.322300 | orchestrator | Monday 02 June 2025 20:20:34 +0000 (0:00:02.904) 0:05:20.395 *********** 2025-06-02 20:23:42.322324 | orchestrator | skipping: [testbed-node-3] 2025-06-02 20:23:42.322339 | orchestrator | skipping: [testbed-node-4] 2025-06-02 20:23:42.322355 | orchestrator | skipping: [testbed-node-5] 2025-06-02 20:23:42.322379 | 
orchestrator | skipping: [testbed-node-0] 2025-06-02 20:23:42.322395 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:23:42.322410 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:23:42.322427 | orchestrator | 2025-06-02 20:23:42.322443 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-06-02 20:23:42.322460 | orchestrator | Monday 02 June 2025 20:20:34 +0000 (0:00:00.571) 0:05:20.967 *********** 2025-06-02 20:23:42.322480 | orchestrator | 2025-06-02 20:23:42.322490 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-06-02 20:23:42.322500 | orchestrator | Monday 02 June 2025 20:20:34 +0000 (0:00:00.308) 0:05:21.275 *********** 2025-06-02 20:23:42.322509 | orchestrator | 2025-06-02 20:23:42.322519 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-06-02 20:23:42.322528 | orchestrator | Monday 02 June 2025 20:20:35 +0000 (0:00:00.129) 0:05:21.405 *********** 2025-06-02 20:23:42.322538 | orchestrator | 2025-06-02 20:23:42.322547 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-06-02 20:23:42.322556 | orchestrator | Monday 02 June 2025 20:20:35 +0000 (0:00:00.133) 0:05:21.538 *********** 2025-06-02 20:23:42.322566 | orchestrator | 2025-06-02 20:23:42.322575 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-06-02 20:23:42.322585 | orchestrator | Monday 02 June 2025 20:20:35 +0000 (0:00:00.129) 0:05:21.667 *********** 2025-06-02 20:23:42.322594 | orchestrator | 2025-06-02 20:23:42.322604 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-06-02 20:23:42.322613 | orchestrator | Monday 02 June 2025 20:20:35 +0000 (0:00:00.136) 0:05:21.803 *********** 2025-06-02 20:23:42.322622 | orchestrator | 2025-06-02 20:23:42.322632 | orchestrator | RUNNING 
HANDLER [nova-cell : Restart nova-conductor container] ***************** 2025-06-02 20:23:42.322641 | orchestrator | Monday 02 June 2025 20:20:35 +0000 (0:00:00.123) 0:05:21.927 *********** 2025-06-02 20:23:42.322651 | orchestrator | changed: [testbed-node-0] 2025-06-02 20:23:42.322660 | orchestrator | changed: [testbed-node-1] 2025-06-02 20:23:42.322670 | orchestrator | changed: [testbed-node-2] 2025-06-02 20:23:42.322679 | orchestrator | 2025-06-02 20:23:42.322689 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-novncproxy container] **************** 2025-06-02 20:23:42.322698 | orchestrator | Monday 02 June 2025 20:20:47 +0000 (0:00:12.072) 0:05:34.000 *********** 2025-06-02 20:23:42.322708 | orchestrator | changed: [testbed-node-0] 2025-06-02 20:23:42.322717 | orchestrator | changed: [testbed-node-2] 2025-06-02 20:23:42.322726 | orchestrator | changed: [testbed-node-1] 2025-06-02 20:23:42.322736 | orchestrator | 2025-06-02 20:23:42.322745 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-ssh container] *********************** 2025-06-02 20:23:42.322755 | orchestrator | Monday 02 June 2025 20:21:04 +0000 (0:00:16.409) 0:05:50.409 *********** 2025-06-02 20:23:42.322764 | orchestrator | changed: [testbed-node-3] 2025-06-02 20:23:42.322773 | orchestrator | changed: [testbed-node-5] 2025-06-02 20:23:42.322793 | orchestrator | changed: [testbed-node-4] 2025-06-02 20:23:42.322803 | orchestrator | 2025-06-02 20:23:42.322812 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-libvirt container] ******************* 2025-06-02 20:23:42.322822 | orchestrator | Monday 02 June 2025 20:21:23 +0000 (0:00:19.083) 0:06:09.492 *********** 2025-06-02 20:23:42.322831 | orchestrator | changed: [testbed-node-3] 2025-06-02 20:23:42.322841 | orchestrator | changed: [testbed-node-5] 2025-06-02 20:23:42.322850 | orchestrator | changed: [testbed-node-4] 2025-06-02 20:23:42.322859 | orchestrator | 2025-06-02 20:23:42.322869 | orchestrator | RUNNING HANDLER 
[nova-cell : Checking libvirt container is ready] ************** 2025-06-02 20:23:42.322878 | orchestrator | Monday 02 June 2025 20:21:59 +0000 (0:00:35.903) 0:06:45.395 *********** 2025-06-02 20:23:42.322888 | orchestrator | changed: [testbed-node-3] 2025-06-02 20:23:42.322897 | orchestrator | FAILED - RETRYING: [testbed-node-4]: Checking libvirt container is ready (10 retries left). 2025-06-02 20:23:42.322971 | orchestrator | FAILED - RETRYING: [testbed-node-5]: Checking libvirt container is ready (10 retries left). 2025-06-02 20:23:42.322982 | orchestrator | changed: [testbed-node-4] 2025-06-02 20:23:42.322991 | orchestrator | changed: [testbed-node-5] 2025-06-02 20:23:42.323001 | orchestrator | 2025-06-02 20:23:42.323010 | orchestrator | RUNNING HANDLER [nova-cell : Create libvirt SASL user] ************************* 2025-06-02 20:23:42.323020 | orchestrator | Monday 02 June 2025 20:22:05 +0000 (0:00:06.532) 0:06:51.928 *********** 2025-06-02 20:23:42.323037 | orchestrator | changed: [testbed-node-3] 2025-06-02 20:23:42.323047 | orchestrator | changed: [testbed-node-4] 2025-06-02 20:23:42.323056 | orchestrator | changed: [testbed-node-5] 2025-06-02 20:23:42.323065 | orchestrator | 2025-06-02 20:23:42.323075 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-compute container] ******************* 2025-06-02 20:23:42.323085 | orchestrator | Monday 02 June 2025 20:22:06 +0000 (0:00:00.818) 0:06:52.747 *********** 2025-06-02 20:23:42.323094 | orchestrator | changed: [testbed-node-4] 2025-06-02 20:23:42.323104 | orchestrator | changed: [testbed-node-5] 2025-06-02 20:23:42.323113 | orchestrator | changed: [testbed-node-3] 2025-06-02 20:23:42.323122 | orchestrator | 2025-06-02 20:23:42.323132 | orchestrator | RUNNING HANDLER [nova-cell : Wait for nova-compute services to update service versions] *** 2025-06-02 20:23:42.323141 | orchestrator | Monday 02 June 2025 20:22:33 +0000 (0:00:27.541) 0:07:20.288 *********** 2025-06-02 20:23:42.323151 | orchestrator | 
skipping: [testbed-node-3] 2025-06-02 20:23:42.323160 | orchestrator | 2025-06-02 20:23:42.323170 | orchestrator | TASK [nova-cell : Waiting for nova-compute services to register themselves] **** 2025-06-02 20:23:42.323179 | orchestrator | Monday 02 June 2025 20:22:34 +0000 (0:00:00.117) 0:07:20.405 *********** 2025-06-02 20:23:42.323189 | orchestrator | skipping: [testbed-node-5] 2025-06-02 20:23:42.323199 | orchestrator | skipping: [testbed-node-3] 2025-06-02 20:23:42.323208 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:23:42.323218 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:23:42.323227 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:23:42.323237 | orchestrator | FAILED - RETRYING: [testbed-node-4 -> testbed-node-0]: Waiting for nova-compute services to register themselves (20 retries left). 2025-06-02 20:23:42.323247 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] 2025-06-02 20:23:42.323256 | orchestrator | 2025-06-02 20:23:42.323280 | orchestrator | TASK [nova-cell : Fail if nova-compute service failed to register] ************* 2025-06-02 20:23:42.323298 | orchestrator | Monday 02 June 2025 20:22:55 +0000 (0:00:21.412) 0:07:41.818 *********** 2025-06-02 20:23:42.323318 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:23:42.323328 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:23:42.323338 | orchestrator | skipping: [testbed-node-3] 2025-06-02 20:23:42.323347 | orchestrator | skipping: [testbed-node-5] 2025-06-02 20:23:42.323357 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:23:42.323368 | orchestrator | skipping: [testbed-node-4] 2025-06-02 20:23:42.323385 | orchestrator | 2025-06-02 20:23:42.323401 | orchestrator | TASK [nova-cell : Include discover_computes.yml] ******************************* 2025-06-02 20:23:42.323414 | orchestrator | Monday 02 June 2025 20:23:05 +0000 (0:00:10.208) 0:07:52.026 *********** 2025-06-02 20:23:42.323427 | orchestrator | skipping: 
[testbed-node-3] 2025-06-02 20:23:42.323439 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:23:42.323451 | orchestrator | skipping: [testbed-node-5] 2025-06-02 20:23:42.323464 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:23:42.323476 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:23:42.323489 | orchestrator | included: /ansible/roles/nova-cell/tasks/discover_computes.yml for testbed-node-4 2025-06-02 20:23:42.323502 | orchestrator | 2025-06-02 20:23:42.323514 | orchestrator | TASK [nova-cell : Get a list of existing cells] ******************************** 2025-06-02 20:23:42.323526 | orchestrator | Monday 02 June 2025 20:23:10 +0000 (0:00:04.405) 0:07:56.432 *********** 2025-06-02 20:23:42.323539 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] 2025-06-02 20:23:42.323553 | orchestrator | 2025-06-02 20:23:42.323567 | orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2025-06-02 20:23:42.323580 | orchestrator | Monday 02 June 2025 20:23:22 +0000 (0:00:11.967) 0:08:08.400 *********** 2025-06-02 20:23:42.323594 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] 2025-06-02 20:23:42.323603 | orchestrator | 2025-06-02 20:23:42.323611 | orchestrator | TASK [nova-cell : Fail if cell settings not found] ***************************** 2025-06-02 20:23:42.323619 | orchestrator | Monday 02 June 2025 20:23:23 +0000 (0:00:01.329) 0:08:09.729 *********** 2025-06-02 20:23:42.323634 | orchestrator | skipping: [testbed-node-4] 2025-06-02 20:23:42.323642 | orchestrator | 2025-06-02 20:23:42.323649 | orchestrator | TASK [nova-cell : Discover nova hosts] ***************************************** 2025-06-02 20:23:42.323657 | orchestrator | Monday 02 June 2025 20:23:24 +0000 (0:00:01.307) 0:08:11.037 *********** 2025-06-02 20:23:42.323665 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] 2025-06-02 20:23:42.323672 | orchestrator | 2025-06-02 
20:23:42.323680 | orchestrator | TASK [nova-cell : Remove old nova_libvirt_secrets container volume] ************ 2025-06-02 20:23:42.323688 | orchestrator | Monday 02 June 2025 20:23:35 +0000 (0:00:10.796) 0:08:21.834 *********** 2025-06-02 20:23:42.323696 | orchestrator | ok: [testbed-node-3] 2025-06-02 20:23:42.323703 | orchestrator | ok: [testbed-node-4] 2025-06-02 20:23:42.323711 | orchestrator | ok: [testbed-node-5] 2025-06-02 20:23:42.323725 | orchestrator | ok: [testbed-node-0] 2025-06-02 20:23:42.323732 | orchestrator | ok: [testbed-node-1] 2025-06-02 20:23:42.323740 | orchestrator | ok: [testbed-node-2] 2025-06-02 20:23:42.323748 | orchestrator | 2025-06-02 20:23:42.323756 | orchestrator | PLAY [Refresh nova scheduler cell cache] *************************************** 2025-06-02 20:23:42.323763 | orchestrator | 2025-06-02 20:23:42.323771 | orchestrator | TASK [nova : Refresh cell cache in nova scheduler] ***************************** 2025-06-02 20:23:42.323779 | orchestrator | Monday 02 June 2025 20:23:37 +0000 (0:00:01.656) 0:08:23.491 *********** 2025-06-02 20:23:42.323787 | orchestrator | changed: [testbed-node-0] 2025-06-02 20:23:42.323794 | orchestrator | changed: [testbed-node-1] 2025-06-02 20:23:42.323802 | orchestrator | changed: [testbed-node-2] 2025-06-02 20:23:42.323810 | orchestrator | 2025-06-02 20:23:42.323818 | orchestrator | PLAY [Reload global Nova super conductor services] ***************************** 2025-06-02 20:23:42.323825 | orchestrator | 2025-06-02 20:23:42.323833 | orchestrator | TASK [nova : Reload nova super conductor services to remove RPC version pin] *** 2025-06-02 20:23:42.323841 | orchestrator | Monday 02 June 2025 20:23:38 +0000 (0:00:01.099) 0:08:24.590 *********** 2025-06-02 20:23:42.323848 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:23:42.323856 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:23:42.323864 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:23:42.323872 | orchestrator | 
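The "Checking libvirt container is ready" handler above failed once on testbed-node-4 and testbed-node-5 ("10 retries left") before succeeding, which is Ansible's `until`/`retries`/`delay` polling pattern used throughout these kolla-ansible roles. A minimal sketch of that loop in plain Python (the `wait_until` helper and the simulated probe are illustrative, not part of kolla-ansible):

```python
import time

def wait_until(probe, retries=10, delay=5.0):
    """Poll `probe` until it returns True, mirroring Ansible's
    until/retries/delay semantics: one initial attempt plus
    `retries` retries, sleeping `delay` seconds between attempts."""
    attempts = 1 + retries
    for attempt in range(attempts):
        if probe():
            return attempt + 1  # number of attempts actually used
        if attempt < attempts - 1:
            time.sleep(delay)
    raise TimeoutError(f"probe still failing after {attempts} attempts")

# Simulated probe: fails on the first call and succeeds on the
# second, like the libvirt readiness check on testbed-node-4/5.
state = {"calls": 0}
def libvirt_ready():
    state["calls"] += 1
    return state["calls"] >= 2

attempts_used = wait_until(libvirt_ready, retries=10, delay=0.0)
```

In the real handler the probe would be a `docker exec nova_libvirt` health command (the container definitions above use `virsh version --daemon` as their healthcheck test); the simulated counter keeps the sketch runnable offline.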
2025-06-02 20:23:42.323880 | orchestrator | PLAY [Reload Nova cell services] *********************************************** 2025-06-02 20:23:42.323888 | orchestrator | 2025-06-02 20:23:42.323895 | orchestrator | TASK [nova-cell : Reload nova cell services to remove RPC version cap] ********* 2025-06-02 20:23:42.323922 | orchestrator | Monday 02 June 2025 20:23:38 +0000 (0:00:00.493) 0:08:25.083 *********** 2025-06-02 20:23:42.323931 | orchestrator | skipping: [testbed-node-3] => (item=nova-conductor)  2025-06-02 20:23:42.323939 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)  2025-06-02 20:23:42.323947 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)  2025-06-02 20:23:42.323954 | orchestrator | skipping: [testbed-node-3] => (item=nova-novncproxy)  2025-06-02 20:23:42.323962 | orchestrator | skipping: [testbed-node-3] => (item=nova-serialproxy)  2025-06-02 20:23:42.323969 | orchestrator | skipping: [testbed-node-3] => (item=nova-spicehtml5proxy)  2025-06-02 20:23:42.323977 | orchestrator | skipping: [testbed-node-3] 2025-06-02 20:23:42.323985 | orchestrator | skipping: [testbed-node-4] => (item=nova-conductor)  2025-06-02 20:23:42.323992 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)  2025-06-02 20:23:42.324000 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)  2025-06-02 20:23:42.324008 | orchestrator | skipping: [testbed-node-4] => (item=nova-novncproxy)  2025-06-02 20:23:42.324015 | orchestrator | skipping: [testbed-node-4] => (item=nova-serialproxy)  2025-06-02 20:23:42.324023 | orchestrator | skipping: [testbed-node-4] => (item=nova-spicehtml5proxy)  2025-06-02 20:23:42.324030 | orchestrator | skipping: [testbed-node-4] 2025-06-02 20:23:42.324038 | orchestrator | skipping: [testbed-node-5] => (item=nova-conductor)  2025-06-02 20:23:42.324046 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)  2025-06-02 20:23:42.324054 | orchestrator | skipping: 
[testbed-node-5] => (item=nova-compute-ironic)  2025-06-02 20:23:42.324066 | orchestrator | skipping: [testbed-node-5] => (item=nova-novncproxy)  2025-06-02 20:23:42.324081 | orchestrator | skipping: [testbed-node-5] => (item=nova-serialproxy)  2025-06-02 20:23:42.324089 | orchestrator | skipping: [testbed-node-5] => (item=nova-spicehtml5proxy)  2025-06-02 20:23:42.324097 | orchestrator | skipping: [testbed-node-5] 2025-06-02 20:23:42.324105 | orchestrator | skipping: [testbed-node-0] => (item=nova-conductor)  2025-06-02 20:23:42.324113 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)  2025-06-02 20:23:42.324120 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)  2025-06-02 20:23:42.324128 | orchestrator | skipping: [testbed-node-0] => (item=nova-novncproxy)  2025-06-02 20:23:42.324136 | orchestrator | skipping: [testbed-node-0] => (item=nova-serialproxy)  2025-06-02 20:23:42.324144 | orchestrator | skipping: [testbed-node-0] => (item=nova-spicehtml5proxy)  2025-06-02 20:23:42.324151 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:23:42.324159 | orchestrator | skipping: [testbed-node-1] => (item=nova-conductor)  2025-06-02 20:23:42.324167 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)  2025-06-02 20:23:42.324175 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)  2025-06-02 20:23:42.324182 | orchestrator | skipping: [testbed-node-1] => (item=nova-novncproxy)  2025-06-02 20:23:42.324190 | orchestrator | skipping: [testbed-node-1] => (item=nova-serialproxy)  2025-06-02 20:23:42.324198 | orchestrator | skipping: [testbed-node-1] => (item=nova-spicehtml5proxy)  2025-06-02 20:23:42.324206 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:23:42.324213 | orchestrator | skipping: [testbed-node-2] => (item=nova-conductor)  2025-06-02 20:23:42.324221 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)  2025-06-02 20:23:42.324229 | orchestrator | skipping: 
[testbed-node-2] => (item=nova-compute-ironic)  2025-06-02 20:23:42.324236 | orchestrator | skipping: [testbed-node-2] => (item=nova-novncproxy)  2025-06-02 20:23:42.324244 | orchestrator | skipping: [testbed-node-2] => (item=nova-serialproxy)  2025-06-02 20:23:42.324252 | orchestrator | skipping: [testbed-node-2] => (item=nova-spicehtml5proxy)  2025-06-02 20:23:42.324260 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:23:42.324267 | orchestrator | 2025-06-02 20:23:42.324275 | orchestrator | PLAY [Reload global Nova API services] ***************************************** 2025-06-02 20:23:42.324283 | orchestrator | 2025-06-02 20:23:42.324290 | orchestrator | TASK [nova : Reload nova API services to remove RPC version pin] *************** 2025-06-02 20:23:42.324298 | orchestrator | Monday 02 June 2025 20:23:39 +0000 (0:00:01.265) 0:08:26.349 *********** 2025-06-02 20:23:42.324306 | orchestrator | skipping: [testbed-node-0] => (item=nova-scheduler)  2025-06-02 20:23:42.324314 | orchestrator | skipping: [testbed-node-0] => (item=nova-api)  2025-06-02 20:23:42.324321 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:23:42.324333 | orchestrator | skipping: [testbed-node-1] => (item=nova-scheduler)  2025-06-02 20:23:42.324341 | orchestrator | skipping: [testbed-node-1] => (item=nova-api)  2025-06-02 20:23:42.324349 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:23:42.324357 | orchestrator | skipping: [testbed-node-2] => (item=nova-scheduler)  2025-06-02 20:23:42.324368 | orchestrator | skipping: [testbed-node-2] => (item=nova-api)  2025-06-02 20:23:42.324381 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:23:42.324395 | orchestrator | 2025-06-02 20:23:42.324408 | orchestrator | PLAY [Run Nova API online data migrations] ************************************* 2025-06-02 20:23:42.324421 | orchestrator | 2025-06-02 20:23:42.324436 | orchestrator | TASK [nova : Run Nova API online database migrations] ************************** 2025-06-02 
20:23:42.324449 | orchestrator | Monday 02 June 2025 20:23:40 +0000 (0:00:00.672) 0:08:27.021 *********** 2025-06-02 20:23:42.324464 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:23:42.324472 | orchestrator | 2025-06-02 20:23:42.324480 | orchestrator | PLAY [Run Nova cell online data migrations] ************************************ 2025-06-02 20:23:42.324488 | orchestrator | 2025-06-02 20:23:42.324502 | orchestrator | TASK [nova-cell : Run Nova cell online database migrations] ******************** 2025-06-02 20:23:42.324510 | orchestrator | Monday 02 June 2025 20:23:41 +0000 (0:00:00.632) 0:08:27.653 *********** 2025-06-02 20:23:42.324517 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:23:42.324525 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:23:42.324533 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:23:42.324541 | orchestrator | 2025-06-02 20:23:42.324548 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-02 20:23:42.324556 | orchestrator | testbed-manager : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-02 20:23:42.324566 | orchestrator | testbed-node-0 : ok=54  changed=35  unreachable=0 failed=0 skipped=44  rescued=0 ignored=0 2025-06-02 20:23:42.324575 | orchestrator | testbed-node-1 : ok=27  changed=19  unreachable=0 failed=0 skipped=51  rescued=0 ignored=0 2025-06-02 20:23:42.324583 | orchestrator | testbed-node-2 : ok=27  changed=19  unreachable=0 failed=0 skipped=51  rescued=0 ignored=0 2025-06-02 20:23:42.324591 | orchestrator | testbed-node-3 : ok=38  changed=27  unreachable=0 failed=0 skipped=21  rescued=0 ignored=0 2025-06-02 20:23:42.324598 | orchestrator | testbed-node-4 : ok=42  changed=27  unreachable=0 failed=0 skipped=18  rescued=0 ignored=0 2025-06-02 20:23:42.324606 | orchestrator | testbed-node-5 : ok=37  changed=27  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0 2025-06-02 20:23:42.324614 | orchestrator | 
2025-06-02 20:23:42.324622 | orchestrator | 2025-06-02 20:23:42.324630 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-02 20:23:42.324643 | orchestrator | Monday 02 June 2025 20:23:41 +0000 (0:00:00.404) 0:08:28.057 *********** 2025-06-02 20:23:42.324651 | orchestrator | =============================================================================== 2025-06-02 20:23:42.324659 | orchestrator | nova-cell : Restart nova-libvirt container ----------------------------- 35.90s 2025-06-02 20:23:42.324667 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 29.28s 2025-06-02 20:23:42.324675 | orchestrator | nova-cell : Restart nova-compute container ----------------------------- 27.54s 2025-06-02 20:23:42.324682 | orchestrator | nova : Restart nova-scheduler container -------------------------------- 25.42s 2025-06-02 20:23:42.324690 | orchestrator | nova-cell : Waiting for nova-compute services to register themselves --- 21.41s 2025-06-02 20:23:42.324698 | orchestrator | nova-cell : Running Nova cell bootstrap container ---------------------- 20.18s 2025-06-02 20:23:42.324705 | orchestrator | nova-cell : Restart nova-ssh container --------------------------------- 19.08s 2025-06-02 20:23:42.324713 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 17.48s 2025-06-02 20:23:42.324720 | orchestrator | nova-cell : Restart nova-novncproxy container -------------------------- 16.41s 2025-06-02 20:23:42.324728 | orchestrator | nova : Create cell0 mappings ------------------------------------------- 15.59s 2025-06-02 20:23:42.324736 | orchestrator | nova-cell : Create cell ------------------------------------------------ 12.36s 2025-06-02 20:23:42.324743 | orchestrator | nova-cell : Restart nova-conductor container --------------------------- 12.07s 2025-06-02 20:23:42.324751 | orchestrator | nova-cell : Get a list of existing cells 
------------------------------- 12.07s 2025-06-02 20:23:42.324759 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 11.97s 2025-06-02 20:23:42.324767 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 11.96s 2025-06-02 20:23:42.324774 | orchestrator | nova-cell : Discover nova hosts ---------------------------------------- 10.80s 2025-06-02 20:23:42.324782 | orchestrator | nova-cell : Fail if nova-compute service failed to register ------------ 10.21s 2025-06-02 20:23:42.324797 | orchestrator | nova : Restart nova-api container --------------------------------------- 8.34s 2025-06-02 20:23:42.324811 | orchestrator | service-rabbitmq : nova | Ensure RabbitMQ users exist ------------------- 8.20s 2025-06-02 20:23:42.324825 | orchestrator | service-ks-register : nova | Granting user roles ------------------------ 7.95s 2025-06-02 20:23:42.324838 | orchestrator | 2025-06-02 20:23:42 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-06-02 20:24:43.130186 | orchestrator | 2025-06-02 20:24:43.368993 | orchestrator | 2025-06-02 20:24:43.372249 | orchestrator | --> DEPLOY IN A NUTSHELL -- END -- Mon Jun 2 20:24:43 UTC 2025 2025-06-02 20:24:43.372351 | orchestrator | 2025-06-02 20:24:43.699152 | orchestrator | ok: Runtime: 0:34:22.473195 2025-06-02 20:24:43.950646 | 2025-06-02 20:24:43.950901 | TASK [Bootstrap services] 2025-06-02 20:24:44.714957 | orchestrator | 2025-06-02 20:24:44.715143 | orchestrator | # BOOTSTRAP 2025-06-02 20:24:44.715156 | orchestrator | 2025-06-02 20:24:44.715165 | orchestrator | + set -e 2025-06-02 20:24:44.715173 | orchestrator | + echo
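The PLAY RECAP lines above follow Ansible's fixed `host : ok=N changed=N unreachable=N failed=N ...` layout, so failure counts can be extracted mechanically when post-processing a log like this one. The parser below is an illustrative sketch, not part of the job itself:

```python
import re

# Matches one Ansible PLAY RECAP line: hostname, then key=value counters.
RECAP_RE = re.compile(
    r"^(?P<host>\S+)\s*:\s*"           # inventory hostname before the colon
    r"(?P<counters>(?:\w+=\d+\s*)+)$"  # ok=, changed=, unreachable=, failed=, ...
)

def parse_recap_line(line: str) -> tuple[str, dict[str, int]]:
    """Parse one PLAY RECAP line into (host, counter dict)."""
    m = RECAP_RE.match(line.strip())
    if m is None:
        raise ValueError(f"not a recap line: {line!r}")
    counters = {
        key: int(value)
        for key, value in (pair.split("=") for pair in m.group("counters").split())
    }
    return m.group("host"), counters

host, counters = parse_recap_line(
    "testbed-node-0 : ok=54  changed=35  unreachable=0 failed=0 skipped=44  rescued=0 ignored=0"
)
# host == "testbed-node-0"; counters["failed"] == 0 means the play succeeded there.
```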
2025-06-02 20:24:44.715181 | orchestrator | + echo '# BOOTSTRAP' 2025-06-02 20:24:44.715192 | orchestrator | + echo 2025-06-02 20:24:44.715224 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap-services.sh 2025-06-02 20:24:44.720234 | orchestrator | + set -e 2025-06-02 20:24:44.720312 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap/300-openstack.sh 2025-06-02 20:24:48.336264 | orchestrator | 2025-06-02 20:24:48 | INFO  | It takes a moment until task 592e6f89-c0ca-44d4-89e6-3c1297f92eac (flavor-manager) has been started and output is visible here. 2025-06-02 20:24:52.243307 | orchestrator | 2025-06-02 20:24:52 | INFO  | Flavor SCS-1V-4 created 2025-06-02 20:24:52.563577 | orchestrator | 2025-06-02 20:24:52 | INFO  | Flavor SCS-2V-8 created 2025-06-02 20:24:52.981573 | orchestrator | 2025-06-02 20:24:52 | INFO  | Flavor SCS-4V-16 created 2025-06-02 20:24:53.151363 | orchestrator | 2025-06-02 20:24:53 | INFO  | Flavor SCS-8V-32 created 2025-06-02 20:24:53.291535 | orchestrator | 2025-06-02 20:24:53 | INFO  | Flavor SCS-1V-2 created 2025-06-02 20:24:53.442972 | orchestrator | 2025-06-02 20:24:53 | INFO  | Flavor SCS-2V-4 created 2025-06-02 20:24:53.590550 | orchestrator | 2025-06-02 20:24:53 | INFO  | Flavor SCS-4V-8 created 2025-06-02 20:24:53.740472 | orchestrator | 2025-06-02 20:24:53 | INFO  | Flavor SCS-8V-16 created 2025-06-02 20:24:53.885374 | orchestrator | 2025-06-02 20:24:53 | INFO  | Flavor SCS-16V-32 created 2025-06-02 20:24:54.025200 | orchestrator | 2025-06-02 20:24:54 | INFO  | Flavor SCS-1V-8 created 2025-06-02 20:24:54.154943 | orchestrator | 2025-06-02 20:24:54 | INFO  | Flavor SCS-2V-16 created 2025-06-02 20:24:54.315557 | orchestrator | 2025-06-02 20:24:54 | INFO  | Flavor SCS-4V-32 created 2025-06-02 20:24:54.440850 | orchestrator | 2025-06-02 20:24:54 | INFO  | Flavor SCS-1L-1 created 2025-06-02 20:24:54.588581 | orchestrator | 2025-06-02 20:24:54 | INFO  | Flavor SCS-2V-4-20s created 2025-06-02 20:24:54.753177 | 
orchestrator | 2025-06-02 20:24:54 | INFO  | Flavor SCS-4V-16-100s created 2025-06-02 20:24:54.900544 | orchestrator | 2025-06-02 20:24:54 | INFO  | Flavor SCS-1V-4-10 created 2025-06-02 20:24:55.041071 | orchestrator | 2025-06-02 20:24:55 | INFO  | Flavor SCS-2V-8-20 created 2025-06-02 20:24:55.186318 | orchestrator | 2025-06-02 20:24:55 | INFO  | Flavor SCS-4V-16-50 created 2025-06-02 20:24:55.334532 | orchestrator | 2025-06-02 20:24:55 | INFO  | Flavor SCS-8V-32-100 created 2025-06-02 20:24:55.470224 | orchestrator | 2025-06-02 20:24:55 | INFO  | Flavor SCS-1V-2-5 created 2025-06-02 20:24:55.614634 | orchestrator | 2025-06-02 20:24:55 | INFO  | Flavor SCS-2V-4-10 created 2025-06-02 20:24:55.750481 | orchestrator | 2025-06-02 20:24:55 | INFO  | Flavor SCS-4V-8-20 created 2025-06-02 20:24:55.888758 | orchestrator | 2025-06-02 20:24:55 | INFO  | Flavor SCS-8V-16-50 created 2025-06-02 20:24:56.033618 | orchestrator | 2025-06-02 20:24:56 | INFO  | Flavor SCS-16V-32-100 created 2025-06-02 20:24:56.166465 | orchestrator | 2025-06-02 20:24:56 | INFO  | Flavor SCS-1V-8-20 created 2025-06-02 20:24:56.345311 | orchestrator | 2025-06-02 20:24:56 | INFO  | Flavor SCS-2V-16-50 created 2025-06-02 20:24:56.499608 | orchestrator | 2025-06-02 20:24:56 | INFO  | Flavor SCS-4V-32-100 created 2025-06-02 20:24:56.632420 | orchestrator | 2025-06-02 20:24:56 | INFO  | Flavor SCS-1L-1-5 created 2025-06-02 20:24:58.862991 | orchestrator | 2025-06-02 20:24:58 | INFO  | Trying to run play bootstrap-basic in environment openstack 2025-06-02 20:24:58.867933 | orchestrator | Registering Redlock._acquired_script 2025-06-02 20:24:58.868004 | orchestrator | Registering Redlock._extend_script 2025-06-02 20:24:58.868042 | orchestrator | Registering Redlock._release_script 2025-06-02 20:24:58.928398 | orchestrator | 2025-06-02 20:24:58 | INFO  | Task 7e509584-c49e-4f90-a2a5-7847ce8c21b3 (bootstrap-basic) was prepared for execution. 
2025-06-02 20:24:58.928530 | orchestrator | 2025-06-02 20:24:58 | INFO  | It takes a moment until task 7e509584-c49e-4f90-a2a5-7847ce8c21b3 (bootstrap-basic) has been started and output is visible here. 2025-06-02 20:25:03.216328 | orchestrator | 2025-06-02 20:25:03.216420 | orchestrator | PLAY [Bootstrap basic OpenStack services] ************************************** 2025-06-02 20:25:03.217293 | orchestrator | 2025-06-02 20:25:03.217741 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-06-02 20:25:03.218473 | orchestrator | Monday 02 June 2025 20:25:03 +0000 (0:00:00.085) 0:00:00.085 *********** 2025-06-02 20:25:05.064475 | orchestrator | ok: [localhost] 2025-06-02 20:25:05.064614 | orchestrator | 2025-06-02 20:25:05.065164 | orchestrator | TASK [Get volume type LUKS] **************************************************** 2025-06-02 20:25:05.066118 | orchestrator | Monday 02 June 2025 20:25:05 +0000 (0:00:01.854) 0:00:01.940 *********** 2025-06-02 20:25:13.039238 | orchestrator | ok: [localhost] 2025-06-02 20:25:13.039343 | orchestrator | 2025-06-02 20:25:13.043595 | orchestrator | TASK [Create volume type LUKS] ************************************************* 2025-06-02 20:25:13.043868 | orchestrator | Monday 02 June 2025 20:25:13 +0000 (0:00:07.966) 0:00:09.907 *********** 2025-06-02 20:25:20.456678 | orchestrator | changed: [localhost] 2025-06-02 20:25:20.457401 | orchestrator | 2025-06-02 20:25:20.458262 | orchestrator | TASK [Get volume type local] *************************************************** 2025-06-02 20:25:20.458603 | orchestrator | Monday 02 June 2025 20:25:20 +0000 (0:00:07.423) 0:00:17.331 *********** 2025-06-02 20:25:27.463454 | orchestrator | ok: [localhost] 2025-06-02 20:25:27.463567 | orchestrator | 2025-06-02 20:25:27.465270 | orchestrator | TASK [Create volume type local] ************************************************ 2025-06-02 20:25:27.466491 | orchestrator | Monday 02 June 2025 
20:25:27 +0000 (0:00:07.006) 0:00:24.338 *********** 2025-06-02 20:25:33.377396 | orchestrator | changed: [localhost] 2025-06-02 20:25:33.378539 | orchestrator | 2025-06-02 20:25:33.378875 | orchestrator | TASK [Create public network] *************************************************** 2025-06-02 20:25:33.379718 | orchestrator | Monday 02 June 2025 20:25:33 +0000 (0:00:05.913) 0:00:30.251 *********** 2025-06-02 20:25:40.523348 | orchestrator | changed: [localhost] 2025-06-02 20:25:40.523444 | orchestrator | 2025-06-02 20:25:40.524005 | orchestrator | TASK [Set public network to default] ******************************************* 2025-06-02 20:25:40.524668 | orchestrator | Monday 02 June 2025 20:25:40 +0000 (0:00:07.147) 0:00:37.399 *********** 2025-06-02 20:25:47.381670 | orchestrator | changed: [localhost] 2025-06-02 20:25:47.381858 | orchestrator | 2025-06-02 20:25:47.382662 | orchestrator | TASK [Create public subnet] **************************************************** 2025-06-02 20:25:47.385016 | orchestrator | Monday 02 June 2025 20:25:47 +0000 (0:00:06.853) 0:00:44.252 *********** 2025-06-02 20:25:52.043123 | orchestrator | changed: [localhost] 2025-06-02 20:25:52.043303 | orchestrator | 2025-06-02 20:25:52.043854 | orchestrator | TASK [Create default IPv4 subnet pool] ***************************************** 2025-06-02 20:25:52.043958 | orchestrator | Monday 02 June 2025 20:25:52 +0000 (0:00:04.665) 0:00:48.918 *********** 2025-06-02 20:25:56.295245 | orchestrator | changed: [localhost] 2025-06-02 20:25:56.296138 | orchestrator | 2025-06-02 20:25:56.297546 | orchestrator | TASK [Create manager role] ***************************************************** 2025-06-02 20:25:56.298845 | orchestrator | Monday 02 June 2025 20:25:56 +0000 (0:00:04.251) 0:00:53.169 *********** 2025-06-02 20:25:59.817937 | orchestrator | ok: [localhost] 2025-06-02 20:25:59.818162 | orchestrator | 2025-06-02 20:25:59.818344 | orchestrator | PLAY RECAP 
********************************************************************* 2025-06-02 20:25:59.818367 | orchestrator | 2025-06-02 20:25:59 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-06-02 20:25:59.818381 | orchestrator | 2025-06-02 20:25:59 | INFO  | Please wait and do not abort execution. 2025-06-02 20:25:59.819307 | orchestrator | localhost : ok=10  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-02 20:25:59.819478 | orchestrator | 2025-06-02 20:25:59.819679 | orchestrator | 2025-06-02 20:25:59.820350 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-02 20:25:59.820800 | orchestrator | Monday 02 June 2025 20:25:59 +0000 (0:00:03.522) 0:00:56.692 *********** 2025-06-02 20:25:59.822853 | orchestrator | =============================================================================== 2025-06-02 20:25:59.823259 | orchestrator | Get volume type LUKS ---------------------------------------------------- 7.97s 2025-06-02 20:25:59.825558 | orchestrator | Create volume type LUKS ------------------------------------------------- 7.42s 2025-06-02 20:25:59.826438 | orchestrator | Create public network --------------------------------------------------- 7.15s 2025-06-02 20:25:59.826841 | orchestrator | Get volume type local --------------------------------------------------- 7.01s 2025-06-02 20:25:59.827772 | orchestrator | Set public network to default ------------------------------------------- 6.85s 2025-06-02 20:25:59.828060 | orchestrator | Create volume type local ------------------------------------------------ 5.91s 2025-06-02 20:25:59.829005 | orchestrator | Create public subnet ---------------------------------------------------- 4.67s 2025-06-02 20:25:59.829629 | orchestrator | Create default IPv4 subnet pool ----------------------------------------- 4.25s 2025-06-02 20:25:59.829896 | orchestrator | Create manager role 
----------------------------------------------------- 3.52s 2025-06-02 20:25:59.830805 | orchestrator | Gathering Facts --------------------------------------------------------- 1.85s 2025-06-02 20:26:02.125541 | orchestrator | 2025-06-02 20:26:02 | INFO  | It takes a moment until task 2eeb65df-cb84-4c6f-9930-9597059aa910 (image-manager) has been started and output is visible here. 2025-06-02 20:26:05.567189 | orchestrator | 2025-06-02 20:26:05 | INFO  | Processing image 'Cirros 0.6.2' 2025-06-02 20:26:05.792606 | orchestrator | 2025-06-02 20:26:05 | INFO  | Tested URL https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img: 302 2025-06-02 20:26:05.794299 | orchestrator | 2025-06-02 20:26:05 | INFO  | Importing image Cirros 0.6.2 2025-06-02 20:26:05.795177 | orchestrator | 2025-06-02 20:26:05 | INFO  | Importing from URL https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img 2025-06-02 20:26:07.543443 | orchestrator | 2025-06-02 20:26:07 | INFO  | Waiting for image to leave queued state... 2025-06-02 20:26:09.591778 | orchestrator | 2025-06-02 20:26:09 | INFO  | Waiting for import to complete... 
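The "Waiting for image to leave queued state..." and "Waiting for import to complete..." messages reflect the usual poll-until-done pattern for asynchronous Glance imports. A generic sketch of that pattern (the actual image-manager internals may differ):

```python
import time
from typing import Callable

def wait_until(check: Callable[[], bool], interval: float = 10.0,
               timeout: float = 600.0) -> bool:
    """Poll `check` every `interval` seconds until it returns True.

    Returns False if `timeout` seconds elapse first, so the caller can
    fail the deployment instead of hanging forever.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if check():
            return True
        time.sleep(interval)
    return False
```

In the log above, the two nested waits would correspond to one `wait_until` on the image leaving the `queued` state and a second on the import reaching `active`.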
2025-06-02 20:26:19.732200 | orchestrator | 2025-06-02 20:26:19 | INFO  | Import of 'Cirros 0.6.2' successfully completed, reloading images 2025-06-02 20:26:19.947897 | orchestrator | 2025-06-02 20:26:19 | INFO  | Checking parameters of 'Cirros 0.6.2' 2025-06-02 20:26:19.948002 | orchestrator | 2025-06-02 20:26:19 | INFO  | Setting internal_version = 0.6.2 2025-06-02 20:26:19.949181 | orchestrator | 2025-06-02 20:26:19 | INFO  | Setting image_original_user = cirros 2025-06-02 20:26:19.949929 | orchestrator | 2025-06-02 20:26:19 | INFO  | Adding tag os:cirros 2025-06-02 20:26:20.193514 | orchestrator | 2025-06-02 20:26:20 | INFO  | Setting property architecture: x86_64 2025-06-02 20:26:20.461772 | orchestrator | 2025-06-02 20:26:20 | INFO  | Setting property hw_disk_bus: scsi 2025-06-02 20:26:20.661213 | orchestrator | 2025-06-02 20:26:20 | INFO  | Setting property hw_rng_model: virtio 2025-06-02 20:26:20.868295 | orchestrator | 2025-06-02 20:26:20 | INFO  | Setting property hw_scsi_model: virtio-scsi 2025-06-02 20:26:21.091852 | orchestrator | 2025-06-02 20:26:21 | INFO  | Setting property hw_watchdog_action: reset 2025-06-02 20:26:21.284458 | orchestrator | 2025-06-02 20:26:21 | INFO  | Setting property hypervisor_type: qemu 2025-06-02 20:26:21.525571 | orchestrator | 2025-06-02 20:26:21 | INFO  | Setting property os_distro: cirros 2025-06-02 20:26:21.788931 | orchestrator | 2025-06-02 20:26:21 | INFO  | Setting property replace_frequency: never 2025-06-02 20:26:21.963291 | orchestrator | 2025-06-02 20:26:21 | INFO  | Setting property uuid_validity: none 2025-06-02 20:26:22.234736 | orchestrator | 2025-06-02 20:26:22 | INFO  | Setting property provided_until: none 2025-06-02 20:26:22.438334 | orchestrator | 2025-06-02 20:26:22 | INFO  | Setting property image_description: Cirros 2025-06-02 20:26:22.650552 | orchestrator | 2025-06-02 20:26:22 | INFO  | Setting property image_name: Cirros 2025-06-02 20:26:22.877716 | orchestrator | 2025-06-02 20:26:22 | INFO  | 
Setting property internal_version: 0.6.2 2025-06-02 20:26:23.080611 | orchestrator | 2025-06-02 20:26:23 | INFO  | Setting property image_original_user: cirros 2025-06-02 20:26:23.279469 | orchestrator | 2025-06-02 20:26:23 | INFO  | Setting property os_version: 0.6.2 2025-06-02 20:26:23.518296 | orchestrator | 2025-06-02 20:26:23 | INFO  | Setting property image_source: https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img 2025-06-02 20:26:23.732740 | orchestrator | 2025-06-02 20:26:23 | INFO  | Setting property image_build_date: 2023-05-30 2025-06-02 20:26:23.951077 | orchestrator | 2025-06-02 20:26:23 | INFO  | Checking status of 'Cirros 0.6.2' 2025-06-02 20:26:23.951861 | orchestrator | 2025-06-02 20:26:23 | INFO  | Checking visibility of 'Cirros 0.6.2' 2025-06-02 20:26:23.952806 | orchestrator | 2025-06-02 20:26:23 | INFO  | Setting visibility of 'Cirros 0.6.2' to 'public' 2025-06-02 20:26:24.156356 | orchestrator | 2025-06-02 20:26:24 | INFO  | Processing image 'Cirros 0.6.3' 2025-06-02 20:26:24.356595 | orchestrator | 2025-06-02 20:26:24 | INFO  | Tested URL https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img: 302 2025-06-02 20:26:24.357130 | orchestrator | 2025-06-02 20:26:24 | INFO  | Importing image Cirros 0.6.3 2025-06-02 20:26:24.357844 | orchestrator | 2025-06-02 20:26:24 | INFO  | Importing from URL https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img 2025-06-02 20:26:25.540334 | orchestrator | 2025-06-02 20:26:25 | INFO  | Waiting for image to leave queued state... 2025-06-02 20:26:27.585767 | orchestrator | 2025-06-02 20:26:27 | INFO  | Waiting for import to complete... 
2025-06-02 20:26:37.879153 | orchestrator | 2025-06-02 20:26:37 | INFO  | Import of 'Cirros 0.6.3' successfully completed, reloading images 2025-06-02 20:26:38.510133 | orchestrator | 2025-06-02 20:26:38 | INFO  | Checking parameters of 'Cirros 0.6.3' 2025-06-02 20:26:38.510208 | orchestrator | 2025-06-02 20:26:38 | INFO  | Setting internal_version = 0.6.3 2025-06-02 20:26:38.511358 | orchestrator | 2025-06-02 20:26:38 | INFO  | Setting image_original_user = cirros 2025-06-02 20:26:38.512400 | orchestrator | 2025-06-02 20:26:38 | INFO  | Adding tag os:cirros 2025-06-02 20:26:38.845089 | orchestrator | 2025-06-02 20:26:38 | INFO  | Setting property architecture: x86_64 2025-06-02 20:26:39.137594 | orchestrator | 2025-06-02 20:26:39 | INFO  | Setting property hw_disk_bus: scsi 2025-06-02 20:26:39.357920 | orchestrator | 2025-06-02 20:26:39 | INFO  | Setting property hw_rng_model: virtio 2025-06-02 20:26:39.601993 | orchestrator | 2025-06-02 20:26:39 | INFO  | Setting property hw_scsi_model: virtio-scsi 2025-06-02 20:26:39.807606 | orchestrator | 2025-06-02 20:26:39 | INFO  | Setting property hw_watchdog_action: reset 2025-06-02 20:26:40.113522 | orchestrator | 2025-06-02 20:26:40 | INFO  | Setting property hypervisor_type: qemu 2025-06-02 20:26:40.327568 | orchestrator | 2025-06-02 20:26:40 | INFO  | Setting property os_distro: cirros 2025-06-02 20:26:40.537094 | orchestrator | 2025-06-02 20:26:40 | INFO  | Setting property replace_frequency: never 2025-06-02 20:26:40.759502 | orchestrator | 2025-06-02 20:26:40 | INFO  | Setting property uuid_validity: none 2025-06-02 20:26:41.008135 | orchestrator | 2025-06-02 20:26:41 | INFO  | Setting property provided_until: none 2025-06-02 20:26:41.221177 | orchestrator | 2025-06-02 20:26:41 | INFO  | Setting property image_description: Cirros 2025-06-02 20:26:41.443060 | orchestrator | 2025-06-02 20:26:41 | INFO  | Setting property image_name: Cirros 2025-06-02 20:26:41.681137 | orchestrator | 2025-06-02 20:26:41 | INFO  | 
Setting property internal_version: 0.6.3 2025-06-02 20:26:41.903955 | orchestrator | 2025-06-02 20:26:41 | INFO  | Setting property image_original_user: cirros 2025-06-02 20:26:42.148166 | orchestrator | 2025-06-02 20:26:42 | INFO  | Setting property os_version: 0.6.3 2025-06-02 20:26:42.356646 | orchestrator | 2025-06-02 20:26:42 | INFO  | Setting property image_source: https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img 2025-06-02 20:26:42.583073 | orchestrator | 2025-06-02 20:26:42 | INFO  | Setting property image_build_date: 2024-09-26 2025-06-02 20:26:42.817337 | orchestrator | 2025-06-02 20:26:42 | INFO  | Checking status of 'Cirros 0.6.3' 2025-06-02 20:26:42.817622 | orchestrator | 2025-06-02 20:26:42 | INFO  | Checking visibility of 'Cirros 0.6.3' 2025-06-02 20:26:42.818769 | orchestrator | 2025-06-02 20:26:42 | INFO  | Setting visibility of 'Cirros 0.6.3' to 'public' 2025-06-02 20:26:43.840909 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh 2025-06-02 20:26:45.783915 | orchestrator | 2025-06-02 20:26:45 | INFO  | date: 2025-06-02 2025-06-02 20:26:45.784014 | orchestrator | 2025-06-02 20:26:45 | INFO  | image: octavia-amphora-haproxy-2024.2.20250602.qcow2 2025-06-02 20:26:45.785003 | orchestrator | 2025-06-02 20:26:45 | INFO  | url: https://swift.services.a.regiocloud.tech/swift/v1/AUTH_b182637428444b9aa302bb8d5a5a418c/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20250602.qcow2 2025-06-02 20:26:45.785074 | orchestrator | 2025-06-02 20:26:45 | INFO  | checksum_url: https://swift.services.a.regiocloud.tech/swift/v1/AUTH_b182637428444b9aa302bb8d5a5a418c/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20250602.qcow2.CHECKSUM 2025-06-02 20:26:45.841495 | orchestrator | 2025-06-02 20:26:45 | INFO  | checksum: 4244ae669e0302e4de8dd880cdee4c27c232e9d393dd18f3521b5d0e7c284b7c 2025-06-02 20:26:45.912391 | orchestrator | 2025-06-02 20:26:45 | 
INFO  | It takes a moment until task e7c1308a-f348-488c-b490-b13e0df4e36c (image-manager) has been started and output is visible here. 2025-06-02 20:26:46.141213 | orchestrator | /usr/local/lib/python3.13/site-packages/openstack_image_manager/__init__.py:5: UserWarning: pkg_resources is deprecated as an API. See https://setuptools.pypa.io/en/latest/pkg_resources.html. The pkg_resources package is slated for removal as early as 2025-11-30. Refrain from using this package or pin to Setuptools<81. 2025-06-02 20:26:46.141921 | orchestrator | from pkg_resources import get_distribution, DistributionNotFound 2025-06-02 20:26:48.409104 | orchestrator | 2025-06-02 20:26:48 | INFO  | Processing image 'OpenStack Octavia Amphora 2025-06-02' 2025-06-02 20:26:48.425810 | orchestrator | 2025-06-02 20:26:48 | INFO  | Tested URL https://swift.services.a.regiocloud.tech/swift/v1/AUTH_b182637428444b9aa302bb8d5a5a418c/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20250602.qcow2: 200 2025-06-02 20:26:48.426589 | orchestrator | 2025-06-02 20:26:48 | INFO  | Importing image OpenStack Octavia Amphora 2025-06-02 2025-06-02 20:26:48.427598 | orchestrator | 2025-06-02 20:26:48 | INFO  | Importing from URL https://swift.services.a.regiocloud.tech/swift/v1/AUTH_b182637428444b9aa302bb8d5a5a418c/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20250602.qcow2 2025-06-02 20:26:49.537737 | orchestrator | 2025-06-02 20:26:49 | INFO  | Waiting for image to leave queued state... 2025-06-02 20:26:51.586322 | orchestrator | 2025-06-02 20:26:51 | INFO  | Waiting for import to complete... 2025-06-02 20:27:01.678400 | orchestrator | 2025-06-02 20:27:01 | INFO  | Waiting for import to complete... 2025-06-02 20:27:11.775032 | orchestrator | 2025-06-02 20:27:11 | INFO  | Waiting for import to complete... 2025-06-02 20:27:22.168368 | orchestrator | 2025-06-02 20:27:22 | INFO  | Waiting for import to complete... 
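The amphora image step above logs both the image URL and a SHA-256 checksum fetched from a `.CHECKSUM` URL. Verifying such a download amounts to hashing the bytes and comparing hex digests; a minimal sketch, assuming the checksum file yields a bare hex digest as shown in the log:

```python
import hashlib

def sha256_hex(data: bytes, chunk_size: int = 1 << 20) -> str:
    """SHA-256 hex digest of `data`, hashed in chunks as one would when
    streaming a multi-gigabyte image file instead of holding it in memory."""
    digest = hashlib.sha256()
    for offset in range(0, len(data), chunk_size):
        digest.update(data[offset:offset + chunk_size])
    return digest.hexdigest()

def verify_checksum(data: bytes, expected_hex: str) -> bool:
    """Compare the computed digest against the published checksum."""
    return sha256_hex(data) == expected_hex.strip().lower()
```

A mismatch here would indicate a corrupted or truncated download and should abort the import rather than upload a broken image.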
2025-06-02 20:27:32.251121 | orchestrator | 2025-06-02 20:27:32 | INFO  | Waiting for import to complete...
2025-06-02 20:27:42.367809 | orchestrator | 2025-06-02 20:27:42 | INFO  | Import of 'OpenStack Octavia Amphora 2025-06-02' successfully completed, reloading images
2025-06-02 20:27:42.678781 | orchestrator | 2025-06-02 20:27:42 | INFO  | Checking parameters of 'OpenStack Octavia Amphora 2025-06-02'
2025-06-02 20:27:42.678897 | orchestrator | 2025-06-02 20:27:42 | INFO  | Setting internal_version = 2025-06-02
2025-06-02 20:27:42.678969 | orchestrator | 2025-06-02 20:27:42 | INFO  | Setting image_original_user = ubuntu
2025-06-02 20:27:42.678982 | orchestrator | 2025-06-02 20:27:42 | INFO  | Adding tag amphora
2025-06-02 20:27:42.916463 | orchestrator | 2025-06-02 20:27:42 | INFO  | Adding tag os:ubuntu
2025-06-02 20:27:43.150187 | orchestrator | 2025-06-02 20:27:43 | INFO  | Setting property architecture: x86_64
2025-06-02 20:27:43.350359 | orchestrator | 2025-06-02 20:27:43 | INFO  | Setting property hw_disk_bus: scsi
2025-06-02 20:27:43.574669 | orchestrator | 2025-06-02 20:27:43 | INFO  | Setting property hw_rng_model: virtio
2025-06-02 20:27:43.743329 | orchestrator | 2025-06-02 20:27:43 | INFO  | Setting property hw_scsi_model: virtio-scsi
2025-06-02 20:27:43.958269 | orchestrator | 2025-06-02 20:27:43 | INFO  | Setting property hw_watchdog_action: reset
2025-06-02 20:27:44.172847 | orchestrator | 2025-06-02 20:27:44 | INFO  | Setting property hypervisor_type: qemu
2025-06-02 20:27:44.387459 | orchestrator | 2025-06-02 20:27:44 | INFO  | Setting property os_distro: ubuntu
2025-06-02 20:27:44.607143 | orchestrator | 2025-06-02 20:27:44 | INFO  | Setting property replace_frequency: quarterly
2025-06-02 20:27:44.822720 | orchestrator | 2025-06-02 20:27:44 | INFO  | Setting property uuid_validity: last-1
2025-06-02 20:27:45.049373 | orchestrator | 2025-06-02 20:27:45 | INFO  | Setting property provided_until: none
2025-06-02 20:27:45.265768 | orchestrator | 2025-06-02 20:27:45 | INFO  | Setting property image_description: OpenStack Octavia Amphora
2025-06-02 20:27:45.493359 | orchestrator | 2025-06-02 20:27:45 | INFO  | Setting property image_name: OpenStack Octavia Amphora
2025-06-02 20:27:45.677003 | orchestrator | 2025-06-02 20:27:45 | INFO  | Setting property internal_version: 2025-06-02
2025-06-02 20:27:45.899542 | orchestrator | 2025-06-02 20:27:45 | INFO  | Setting property image_original_user: ubuntu
2025-06-02 20:27:46.151533 | orchestrator | 2025-06-02 20:27:46 | INFO  | Setting property os_version: 2025-06-02
2025-06-02 20:27:46.347649 | orchestrator | 2025-06-02 20:27:46 | INFO  | Setting property image_source: https://swift.services.a.regiocloud.tech/swift/v1/AUTH_b182637428444b9aa302bb8d5a5a418c/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20250602.qcow2
2025-06-02 20:27:46.567184 | orchestrator | 2025-06-02 20:27:46 | INFO  | Setting property image_build_date: 2025-06-02
2025-06-02 20:27:46.797952 | orchestrator | 2025-06-02 20:27:46 | INFO  | Checking status of 'OpenStack Octavia Amphora 2025-06-02'
2025-06-02 20:27:46.798695 | orchestrator | 2025-06-02 20:27:46 | INFO  | Checking visibility of 'OpenStack Octavia Amphora 2025-06-02'
2025-06-02 20:27:46.986957 | orchestrator | 2025-06-02 20:27:46 | INFO  | Processing image 'Cirros 0.6.3' (removal candidate)
2025-06-02 20:27:46.988399 | orchestrator | 2025-06-02 20:27:46 | WARNING  | No image definition found for 'Cirros 0.6.3', image will be ignored
2025-06-02 20:27:46.988716 | orchestrator | 2025-06-02 20:27:46 | INFO  | Processing image 'Cirros 0.6.2' (removal candidate)
2025-06-02 20:27:46.989677 | orchestrator | 2025-06-02 20:27:46 | WARNING  | No image definition found for 'Cirros 0.6.2', image will be ignored
2025-06-02 20:27:47.634173 | orchestrator | ok: Runtime: 0:03:03.122084
2025-06-02 20:27:47.700675 |
2025-06-02 20:27:47.700811 | TASK [Run checks]
2025-06-02 20:27:48.412082 | orchestrator | + set -e
2025-06-02 20:27:48.412282 | orchestrator | + source /opt/configuration/scripts/include.sh
2025-06-02 20:27:48.412318 | orchestrator | ++ export INTERACTIVE=false
2025-06-02 20:27:48.412349 | orchestrator | ++ INTERACTIVE=false
2025-06-02 20:27:48.412371 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2025-06-02 20:27:48.412392 | orchestrator | ++ OSISM_APPLY_RETRY=1
2025-06-02 20:27:48.412413 | orchestrator | + source /opt/configuration/scripts/manager-version.sh
2025-06-02 20:27:48.413330 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml
2025-06-02 20:27:48.419335 | orchestrator |
2025-06-02 20:27:48.419425 | orchestrator | # CHECK
2025-06-02 20:27:48.419442 | orchestrator |
2025-06-02 20:27:48.419454 | orchestrator | ++ export MANAGER_VERSION=9.1.0
2025-06-02 20:27:48.419472 | orchestrator | ++ MANAGER_VERSION=9.1.0
2025-06-02 20:27:48.419484 | orchestrator | + echo
2025-06-02 20:27:48.419503 | orchestrator | + echo '# CHECK'
2025-06-02 20:27:48.419522 | orchestrator | + echo
2025-06-02 20:27:48.419547 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2
2025-06-02 20:27:48.420189 | orchestrator | ++ semver 9.1.0 5.0.0
2025-06-02 20:27:48.483894 | orchestrator |
2025-06-02 20:27:48.483977 | orchestrator | ## Containers @ testbed-manager
2025-06-02 20:27:48.483987 | orchestrator |
2025-06-02 20:27:48.483997 | orchestrator | + [[ 1 -eq -1 ]]
2025-06-02 20:27:48.484004 | orchestrator | + echo
2025-06-02 20:27:48.484011 | orchestrator | + echo '## Containers @ testbed-manager'
2025-06-02 20:27:48.484017 | orchestrator | + echo
2025-06-02 20:27:48.484024 | orchestrator | + osism container testbed-manager ps
2025-06-02 20:27:50.478783 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
2025-06-02 20:27:50.478940 | orchestrator | e1efb4e56d26 registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20250530 "dumb-init --single-…" 12
minutes ago Up 12 minutes prometheus_blackbox_exporter 2025-06-02 20:27:50.478963 | orchestrator | d738a47d1926 registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20250530 "dumb-init --single-…" 12 minutes ago Up 12 minutes prometheus_alertmanager 2025-06-02 20:27:50.478980 | orchestrator | 5d94d6dc9e0d registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530 "dumb-init --single-…" 12 minutes ago Up 12 minutes prometheus_cadvisor 2025-06-02 20:27:50.478990 | orchestrator | 811cbac7e76e registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_node_exporter 2025-06-02 20:27:50.479000 | orchestrator | 45be553b062b registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20250530 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_server 2025-06-02 20:27:50.479011 | orchestrator | e75c17755bbf registry.osism.tech/osism/cephclient:18.2.7 "/usr/bin/dumb-init …" 17 minutes ago Up 17 minutes cephclient 2025-06-02 20:27:50.479026 | orchestrator | aef35e068fee registry.osism.tech/kolla/release/cron:3.0.20250530 "dumb-init --single-…" 29 minutes ago Up 29 minutes cron 2025-06-02 20:27:50.479036 | orchestrator | 3adde0621d16 registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530 "dumb-init --single-…" 29 minutes ago Up 29 minutes kolla_toolbox 2025-06-02 20:27:50.479046 | orchestrator | dc5e27c88a65 registry.osism.tech/kolla/release/fluentd:5.0.7.20250530 "dumb-init --single-…" 30 minutes ago Up 30 minutes fluentd 2025-06-02 20:27:50.479078 | orchestrator | c19e7b156361 phpmyadmin/phpmyadmin:5.2 "/docker-entrypoint.…" 30 minutes ago Up 30 minutes (healthy) 80/tcp phpmyadmin 2025-06-02 20:27:50.479090 | orchestrator | 779de7960086 registry.osism.tech/osism/openstackclient:2024.2 "/usr/bin/dumb-init …" 31 minutes ago Up 31 minutes openstackclient 2025-06-02 20:27:50.479099 | orchestrator | b4f6e77b710d registry.osism.tech/osism/homer:v25.05.2 
"/bin/sh /entrypoint…" 31 minutes ago Up 31 minutes (healthy) 8080/tcp homer 2025-06-02 20:27:50.479362 | orchestrator | afe7222f9659 registry.osism.tech/dockerhub/ubuntu/squid:6.1-23.10_beta "entrypoint.sh -f /e…" 52 minutes ago Up 51 minutes (healthy) 192.168.16.5:3128->3128/tcp squid 2025-06-02 20:27:50.479389 | orchestrator | 0bd4cefc2257 registry.osism.tech/osism/inventory-reconciler:0.20250530.0 "/sbin/tini -- /entr…" 55 minutes ago Up 37 minutes (healthy) manager-inventory_reconciler-1 2025-06-02 20:27:50.479398 | orchestrator | 8b614c05ba78 registry.osism.tech/osism/osism-kubernetes:0.20250530.0 "/entrypoint.sh osis…" 55 minutes ago Up 38 minutes (healthy) osism-kubernetes 2025-06-02 20:27:50.479408 | orchestrator | 6290da65e78e registry.osism.tech/osism/ceph-ansible:0.20250530.0 "/entrypoint.sh osis…" 55 minutes ago Up 38 minutes (healthy) ceph-ansible 2025-06-02 20:27:50.479414 | orchestrator | fd23bc01573d registry.osism.tech/osism/kolla-ansible:0.20250530.0 "/entrypoint.sh osis…" 55 minutes ago Up 38 minutes (healthy) kolla-ansible 2025-06-02 20:27:50.479419 | orchestrator | f25366a1b6d4 registry.osism.tech/osism/osism-ansible:0.20250531.0 "/entrypoint.sh osis…" 55 minutes ago Up 38 minutes (healthy) osism-ansible 2025-06-02 20:27:50.479424 | orchestrator | 5ea3cad2cc02 registry.osism.tech/osism/ara-server:1.7.2 "sh -c '/wait && /ru…" 55 minutes ago Up 38 minutes (healthy) 8000/tcp manager-ara-server-1 2025-06-02 20:27:50.479430 | orchestrator | fc85d152511b registry.osism.tech/dockerhub/library/mariadb:11.7.2 "docker-entrypoint.s…" 55 minutes ago Up 38 minutes (healthy) 3306/tcp manager-mariadb-1 2025-06-02 20:27:50.479435 | orchestrator | 5d8a36511aba registry.osism.tech/osism/osism:0.20250530.0 "/sbin/tini -- osism…" 55 minutes ago Up 38 minutes (healthy) 192.168.16.5:8000->8000/tcp manager-api-1 2025-06-02 20:27:50.479440 | orchestrator | 7684b1240c89 registry.osism.tech/osism/osism:0.20250530.0 "/sbin/tini -- osism…" 55 minutes ago Up 38 minutes 
(healthy) manager-openstack-1 2025-06-02 20:27:50.479446 | orchestrator | e076f6d46a6a registry.osism.tech/osism/osism:0.20250530.0 "/sbin/tini -- osism…" 55 minutes ago Up 38 minutes (healthy) manager-beat-1 2025-06-02 20:27:50.479460 | orchestrator | 7e9ec2a5cb43 registry.osism.tech/osism/osism:0.20250530.0 "/sbin/tini -- osism…" 55 minutes ago Up 38 minutes (healthy) manager-flower-1 2025-06-02 20:27:50.479466 | orchestrator | 65b0131d06e1 registry.osism.tech/osism/osism:0.20250530.0 "/sbin/tini -- osism…" 55 minutes ago Up 38 minutes (healthy) manager-listener-1 2025-06-02 20:27:50.479471 | orchestrator | 200dcef7e64b registry.osism.tech/osism/osism:0.20250530.0 "/sbin/tini -- sleep…" 55 minutes ago Up 38 minutes (healthy) osismclient 2025-06-02 20:27:50.479476 | orchestrator | 9881c5d0a626 registry.osism.tech/dockerhub/library/redis:7.4.4-alpine "docker-entrypoint.s…" 55 minutes ago Up 38 minutes (healthy) 6379/tcp manager-redis-1 2025-06-02 20:27:50.479481 | orchestrator | 35f4ffe564df registry.osism.tech/dockerhub/library/traefik:v3.4.1 "/entrypoint.sh trae…" 56 minutes ago Up 56 minutes (healthy) 192.168.16.5:80->80/tcp, 192.168.16.5:443->443/tcp, 192.168.16.5:8122->8080/tcp traefik 2025-06-02 20:27:50.706074 | orchestrator | 2025-06-02 20:27:50.706175 | orchestrator | ## Images @ testbed-manager 2025-06-02 20:27:50.706188 | orchestrator | 2025-06-02 20:27:50.706197 | orchestrator | + echo 2025-06-02 20:27:50.706207 | orchestrator | + echo '## Images @ testbed-manager' 2025-06-02 20:27:50.706218 | orchestrator | + echo 2025-06-02 20:27:50.706227 | orchestrator | + osism container testbed-manager images 2025-06-02 20:27:52.666222 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE 2025-06-02 20:27:52.666316 | orchestrator | registry.osism.tech/osism/kolla-ansible 0.20250530.0 f5f0b51afbcc 7 hours ago 574MB 2025-06-02 20:27:52.666329 | orchestrator | registry.osism.tech/osism/homer v25.05.2 e73e0506845d 17 hours ago 11.5MB 2025-06-02 20:27:52.666336 | 
orchestrator | registry.osism.tech/osism/openstackclient 2024.2 86ee4afc8387 17 hours ago 225MB 2025-06-02 20:27:52.666343 | orchestrator | registry.osism.tech/osism/osism-ansible 0.20250531.0 eb6fb0ff8e52 2 days ago 578MB 2025-06-02 20:27:52.666367 | orchestrator | registry.osism.tech/kolla/release/cron 3.0.20250530 fc4477504c4f 2 days ago 319MB 2025-06-02 20:27:52.666374 | orchestrator | registry.osism.tech/kolla/release/kolla-toolbox 19.4.1.20250530 33529d2e8ea7 2 days ago 747MB 2025-06-02 20:27:52.666381 | orchestrator | registry.osism.tech/kolla/release/fluentd 5.0.7.20250530 a0c9ae28d2e7 2 days ago 629MB 2025-06-02 20:27:52.666388 | orchestrator | registry.osism.tech/kolla/release/prometheus-v2-server 2.55.1.20250530 48bb7d2c6b08 2 days ago 892MB 2025-06-02 20:27:52.666394 | orchestrator | registry.osism.tech/kolla/release/prometheus-blackbox-exporter 0.25.0.20250530 3d4c4d6fe7fa 2 days ago 361MB 2025-06-02 20:27:52.666401 | orchestrator | registry.osism.tech/kolla/release/prometheus-cadvisor 0.49.2.20250530 b51a156bac81 2 days ago 411MB 2025-06-02 20:27:52.666407 | orchestrator | registry.osism.tech/kolla/release/prometheus-node-exporter 1.8.2.20250530 a076e6a80bbc 2 days ago 359MB 2025-06-02 20:27:52.666414 | orchestrator | registry.osism.tech/kolla/release/prometheus-alertmanager 0.28.0.20250530 0e447338580d 2 days ago 457MB 2025-06-02 20:27:52.666421 | orchestrator | registry.osism.tech/osism/ceph-ansible 0.20250530.0 bce894afc91f 2 days ago 538MB 2025-06-02 20:27:52.666444 | orchestrator | registry.osism.tech/osism/osism-kubernetes 0.20250530.0 467731c31786 3 days ago 1.21GB 2025-06-02 20:27:52.666452 | orchestrator | registry.osism.tech/osism/inventory-reconciler 0.20250530.0 1b4e0cdc5cdd 3 days ago 308MB 2025-06-02 20:27:52.666458 | orchestrator | registry.osism.tech/osism/osism 0.20250530.0 bce098659f68 3 days ago 297MB 2025-06-02 20:27:52.666465 | orchestrator | registry.osism.tech/dockerhub/library/redis 7.4.4-alpine 7ff232a1fe04 4 days ago 41.4MB 
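The check-script trace above extracts `manager_version: 9.1.0` from `configuration.yml` with awk, then runs `semver 9.1.0 5.0.0`, which prints `1` (left side greater), so the `[[ 1 -eq -1 ]]` guard for pre-5.0.0 managers is skipped. A minimal sketch of that comparison logic — `semver_cmp` is a simplified stand-in for the `semver` helper in the trace and only handles plain numeric X.Y.Z strings, no pre-release suffixes:

```python
def semver_cmp(a: str, b: str) -> int:
    """Return -1 if a < b, 0 if equal, 1 if a > b (numeric X.Y.Z only)."""
    pa = tuple(int(part) for part in a.split("."))
    pb = tuple(int(part) for part in b.split("."))
    return (pa > pb) - (pa < pb)

# Matches the trace: `semver 9.1.0 5.0.0` yields 1, so the
# old-manager branch ([[ ... -eq -1 ]]) is not taken.
assert semver_cmp("9.1.0", "5.0.0") == 1
assert semver_cmp("4.2.0", "5.0.0") == -1
```

Comparing version strings as tuples of integers avoids the classic lexicographic trap where "10" sorts before "9".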
2025-06-02 20:27:52.666471 | orchestrator | registry.osism.tech/dockerhub/library/traefik v3.4.1 ff0a241c8a0a 6 days ago 224MB 2025-06-02 20:27:52.666478 | orchestrator | registry.osism.tech/osism/cephclient 18.2.7 ae977aa79826 3 weeks ago 453MB 2025-06-02 20:27:52.666485 | orchestrator | registry.osism.tech/dockerhub/library/mariadb 11.7.2 4815a3e162ea 3 months ago 328MB 2025-06-02 20:27:52.666491 | orchestrator | phpmyadmin/phpmyadmin 5.2 0276a66ce322 4 months ago 571MB 2025-06-02 20:27:52.666498 | orchestrator | registry.osism.tech/osism/ara-server 1.7.2 bb44122eb176 9 months ago 300MB 2025-06-02 20:27:52.666504 | orchestrator | registry.osism.tech/dockerhub/ubuntu/squid 6.1-23.10_beta 34b6bbbcf74b 11 months ago 146MB 2025-06-02 20:27:52.886545 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2 2025-06-02 20:27:52.886687 | orchestrator | ++ semver 9.1.0 5.0.0 2025-06-02 20:27:52.933850 | orchestrator | 2025-06-02 20:27:52.933948 | orchestrator | ## Containers @ testbed-node-0 2025-06-02 20:27:52.933961 | orchestrator | 2025-06-02 20:27:52.933972 | orchestrator | + [[ 1 -eq -1 ]] 2025-06-02 20:27:52.933983 | orchestrator | + echo 2025-06-02 20:27:52.933993 | orchestrator | + echo '## Containers @ testbed-node-0' 2025-06-02 20:27:52.934004 | orchestrator | + echo 2025-06-02 20:27:52.934052 | orchestrator | + osism container testbed-node-0 ps 2025-06-02 20:27:55.065678 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 2025-06-02 20:27:55.065771 | orchestrator | b640f83b9fc0 registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250530 "dumb-init --single-…" 7 minutes ago Up 7 minutes (healthy) nova_novncproxy 2025-06-02 20:27:55.065781 | orchestrator | 16e35c718624 registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250530 "dumb-init --single-…" 7 minutes ago Up 7 minutes (healthy) nova_conductor 2025-06-02 20:27:55.065788 | orchestrator | ddc4cd60b4dd 
registry.osism.tech/kolla/release/nova-api:30.0.1.20250530 "dumb-init --single-…" 8 minutes ago Up 8 minutes (healthy) nova_api 2025-06-02 20:27:55.065794 | orchestrator | d8dda8a471c3 registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) nova_scheduler 2025-06-02 20:27:55.065800 | orchestrator | 08d545512787 registry.osism.tech/kolla/release/grafana:12.0.1.20250530 "dumb-init --single-…" 9 minutes ago Up 9 minutes grafana 2025-06-02 20:27:55.065805 | orchestrator | cdee7bb83525 registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) cinder_scheduler 2025-06-02 20:27:55.065811 | orchestrator | 1aaab29d24a6 registry.osism.tech/kolla/release/glance-api:29.0.1.20250530 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) glance_api 2025-06-02 20:27:55.065829 | orchestrator | 3f90702aa47b registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) cinder_api 2025-06-02 20:27:55.065835 | orchestrator | 96bf7af13d69 registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250530 "dumb-init --single-…" 12 minutes ago Up 12 minutes prometheus_elasticsearch_exporter 2025-06-02 20:27:55.065857 | orchestrator | a5a6ead146d7 registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530 "dumb-init --single-…" 12 minutes ago Up 12 minutes prometheus_cadvisor 2025-06-02 20:27:55.065863 | orchestrator | be8c8003b993 registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250530 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_memcached_exporter 2025-06-02 20:27:55.065868 | orchestrator | 580ab861c292 registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250530 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_mysqld_exporter 2025-06-02 20:27:55.065874 | orchestrator | a9b62a46e5cf 
registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_node_exporter 2025-06-02 20:27:55.065879 | orchestrator | 1630cb464c4b registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) magnum_conductor 2025-06-02 20:27:55.065885 | orchestrator | 5403be575f87 registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) magnum_api 2025-06-02 20:27:55.065890 | orchestrator | 944a983a3b0b registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) neutron_server 2025-06-02 20:27:55.065896 | orchestrator | 82d7985032c9 registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) designate_worker 2025-06-02 20:27:55.065901 | orchestrator | e85711525b1d registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) designate_mdns 2025-06-02 20:27:55.065907 | orchestrator | 100a630f2285 registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) designate_producer 2025-06-02 20:27:55.065926 | orchestrator | 70e6ebae9e47 registry.osism.tech/kolla/release/designate-central:19.0.1.20250530 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) designate_central 2025-06-02 20:27:55.065932 | orchestrator | 9d5edf1cf968 registry.osism.tech/kolla/release/designate-api:19.0.1.20250530 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) designate_api 2025-06-02 20:27:55.065937 | orchestrator | 49e3e872be9c registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) barbican_worker 2025-06-02 20:27:55.065943 | orchestrator | 
ef0268f831d4 registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) designate_backend_bind9 2025-06-02 20:27:55.065948 | orchestrator | b7af12d5983a registry.osism.tech/kolla/release/placement-api:12.0.1.20250530 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) placement_api 2025-06-02 20:27:55.065954 | orchestrator | 0e859d6e8a1a registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) barbican_keystone_listener 2025-06-02 20:27:55.065959 | orchestrator | 3f34a7f6cd73 registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) barbican_api 2025-06-02 20:27:55.065968 | orchestrator | ae254cde8346 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mgr -…" 16 minutes ago Up 16 minutes ceph-mgr-testbed-node-0 2025-06-02 20:27:55.065980 | orchestrator | 074eeef12f29 registry.osism.tech/kolla/release/keystone:26.0.1.20250530 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) keystone 2025-06-02 20:27:55.065985 | orchestrator | 838b2c195ec9 registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) keystone_fernet 2025-06-02 20:27:55.065991 | orchestrator | 600b0323e25e registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) keystone_ssh 2025-06-02 20:27:55.066000 | orchestrator | 86ccaa26a3de registry.osism.tech/kolla/release/horizon:25.1.1.20250530 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) horizon 2025-06-02 20:27:55.066006 | orchestrator | 239505f5eab9 registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250530 "dumb-init -- kolla_…" 20 minutes ago Up 20 minutes (healthy) mariadb 2025-06-02 20:27:55.066011 | orchestrator | bf128771fa9b 
registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250530 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) opensearch_dashboards 2025-06-02 20:27:55.066043 | orchestrator | 77ba541ebb7e registry.osism.tech/kolla/release/opensearch:2.19.2.20250530 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) opensearch 2025-06-02 20:27:55.066049 | orchestrator | a37f437f9c4a registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-crash" 23 minutes ago Up 23 minutes ceph-crash-testbed-node-0 2025-06-02 20:27:55.066054 | orchestrator | c27b35308ecb registry.osism.tech/kolla/release/keepalived:2.2.7.20250530 "dumb-init --single-…" 23 minutes ago Up 23 minutes keepalived 2025-06-02 20:27:55.066059 | orchestrator | 5e5ef26ad343 registry.osism.tech/kolla/release/proxysql:2.7.3.20250530 "dumb-init --single-…" 23 minutes ago Up 23 minutes (healthy) proxysql 2025-06-02 20:27:55.066065 | orchestrator | 4b3e8e579e47 registry.osism.tech/kolla/release/haproxy:2.6.12.20250530 "dumb-init --single-…" 23 minutes ago Up 23 minutes (healthy) haproxy 2025-06-02 20:27:55.066074 | orchestrator | 7335ec843117 registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250530 "dumb-init --single-…" 26 minutes ago Up 26 minutes ovn_northd 2025-06-02 20:27:55.066079 | orchestrator | 2e1853c7a1d7 registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250530 "dumb-init --single-…" 26 minutes ago Up 26 minutes ovn_sb_db 2025-06-02 20:27:55.066089 | orchestrator | 107d4d81f950 registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250530 "dumb-init --single-…" 26 minutes ago Up 26 minutes ovn_nb_db 2025-06-02 20:27:55.066095 | orchestrator | bf0095668adb registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530 "dumb-init --single-…" 27 minutes ago Up 27 minutes ovn_controller 2025-06-02 20:27:55.066101 | orchestrator | 083c7a0651bf registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mon -…" 27 minutes ago Up 27 minutes ceph-mon-testbed-node-0 
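The `osism container <node> ps` listings above report each container's image as a `repository:tag` reference (e.g. `registry.osism.tech/osism/ceph-daemon:18.2.7`). When auditing which tags a testbed node runs, splitting those references is a common first step; a small sketch follows. The helper name is illustrative, and it assumes the registry host carries no port (true for every reference in this log) so the tag is simply whatever follows the last `:` after the final `/`:

```python
def split_image_ref(ref: str) -> tuple[str, str]:
    """Split 'repo/path/name:tag' into (repository, tag).

    A host with a port (host:5000/name) would need extra care;
    the references in this log have none.
    """
    head, _, tail = ref.rpartition("/")      # isolate the last path segment
    name, _, tag = tail.partition(":")       # tag sits after ':' in that segment
    repo = f"{head}/{name}" if head else name
    return repo, tag or "latest"             # untagged references default to latest

assert split_image_ref("registry.osism.tech/osism/ceph-daemon:18.2.7") == (
    "registry.osism.tech/osism/ceph-daemon", "18.2.7")
```

Partitioning on the last path segment first is what keeps a `:` in the host part (if one ever appeared) from being mistaken for the tag separator.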
2025-06-02 20:27:55.066106 | orchestrator | 8501148f9e41 registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250530 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) rabbitmq 2025-06-02 20:27:55.066136 | orchestrator | 5e6e5e720542 registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250530 "dumb-init --single-…" 29 minutes ago Up 28 minutes (healthy) openvswitch_vswitchd 2025-06-02 20:27:55.066147 | orchestrator | ff3fb5831d79 registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250530 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) redis_sentinel 2025-06-02 20:27:55.066153 | orchestrator | 9ba9965c4d52 registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250530 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) openvswitch_db 2025-06-02 20:27:55.066159 | orchestrator | f2d3506ec45d registry.osism.tech/kolla/release/redis:7.0.15.20250530 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) redis 2025-06-02 20:27:55.066164 | orchestrator | 5dc41a18d61d registry.osism.tech/kolla/release/memcached:1.6.18.20250530 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) memcached 2025-06-02 20:27:55.066170 | orchestrator | 371ec75fcc66 registry.osism.tech/kolla/release/cron:3.0.20250530 "dumb-init --single-…" 29 minutes ago Up 29 minutes cron 2025-06-02 20:27:55.066175 | orchestrator | b03b54a8a6d1 registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530 "dumb-init --single-…" 30 minutes ago Up 30 minutes kolla_toolbox 2025-06-02 20:27:55.066181 | orchestrator | cb990c344973 registry.osism.tech/kolla/release/fluentd:5.0.7.20250530 "dumb-init --single-…" 31 minutes ago Up 31 minutes fluentd 2025-06-02 20:27:55.309400 | orchestrator | 2025-06-02 20:27:55.309515 | orchestrator | ## Images @ testbed-node-0 2025-06-02 20:27:55.309536 | orchestrator | 2025-06-02 20:27:55.309551 | orchestrator | + echo 2025-06-02 20:27:55.309608 | orchestrator | + echo '## Images @ testbed-node-0' 2025-06-02 
20:27:55.309624 | orchestrator | + echo 2025-06-02 20:27:55.309636 | orchestrator | + osism container testbed-node-0 images 2025-06-02 20:27:57.412399 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE 2025-06-02 20:27:57.412499 | orchestrator | registry.osism.tech/kolla/release/memcached 1.6.18.20250530 174e220ad7bd 2 days ago 319MB 2025-06-02 20:27:57.412513 | orchestrator | registry.osism.tech/kolla/release/cron 3.0.20250530 fc4477504c4f 2 days ago 319MB 2025-06-02 20:27:57.412524 | orchestrator | registry.osism.tech/kolla/release/keepalived 2.2.7.20250530 e984e28a57b0 2 days ago 330MB 2025-06-02 20:27:57.412535 | orchestrator | registry.osism.tech/kolla/release/opensearch 2.19.2.20250530 4cfdb500286b 2 days ago 1.59GB 2025-06-02 20:27:57.412546 | orchestrator | registry.osism.tech/kolla/release/opensearch-dashboards 2.19.2.20250530 6fcb2e3a907b 2 days ago 1.55GB 2025-06-02 20:27:57.412583 | orchestrator | registry.osism.tech/kolla/release/proxysql 2.7.3.20250530 a15c96a3369b 2 days ago 419MB 2025-06-02 20:27:57.412595 | orchestrator | registry.osism.tech/kolla/release/kolla-toolbox 19.4.1.20250530 33529d2e8ea7 2 days ago 747MB 2025-06-02 20:27:57.412605 | orchestrator | registry.osism.tech/kolla/release/rabbitmq 3.13.7.20250530 6b32f249a415 2 days ago 376MB 2025-06-02 20:27:57.412616 | orchestrator | registry.osism.tech/kolla/release/haproxy 2.6.12.20250530 e5b003449f46 2 days ago 327MB 2025-06-02 20:27:57.412627 | orchestrator | registry.osism.tech/kolla/release/fluentd 5.0.7.20250530 a0c9ae28d2e7 2 days ago 629MB 2025-06-02 20:27:57.412637 | orchestrator | registry.osism.tech/kolla/release/grafana 12.0.1.20250530 a3fa8a6a4c8c 2 days ago 1.01GB 2025-06-02 20:27:57.412648 | orchestrator | registry.osism.tech/kolla/release/mariadb-server 10.11.13.20250530 5a4e6980c376 2 days ago 591MB 2025-06-02 20:27:57.412658 | orchestrator | registry.osism.tech/kolla/release/prometheus-mysqld-exporter 0.16.0.20250530 acd5d7cf8545 2 days ago 354MB 2025-06-02 20:27:57.412694 
| orchestrator | registry.osism.tech/kolla/release/prometheus-memcached-exporter 0.15.0.20250530 528199032acc 2 days ago 352MB 2025-06-02 20:27:57.412704 | orchestrator | registry.osism.tech/kolla/release/prometheus-cadvisor 0.49.2.20250530 b51a156bac81 2 days ago 411MB 2025-06-02 20:27:57.412715 | orchestrator | registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter 1.8.0.20250530 1ba9b68ab0fa 2 days ago 345MB 2025-06-02 20:27:57.412726 | orchestrator | registry.osism.tech/kolla/release/prometheus-node-exporter 1.8.2.20250530 a076e6a80bbc 2 days ago 359MB 2025-06-02 20:27:57.412736 | orchestrator | registry.osism.tech/kolla/release/redis-sentinel 7.0.15.20250530 4439f43e0847 2 days ago 325MB 2025-06-02 20:27:57.412748 | orchestrator | registry.osism.tech/kolla/release/redis 7.0.15.20250530 854fb3fbb8d1 2 days ago 326MB 2025-06-02 20:27:57.412775 | orchestrator | registry.osism.tech/kolla/release/horizon 25.1.1.20250530 81218760d1ef 2 days ago 1.21GB 2025-06-02 20:27:57.412787 | orchestrator | registry.osism.tech/kolla/release/openvswitch-db-server 3.4.2.20250530 8775c34ea5d6 2 days ago 362MB 2025-06-02 20:27:57.412798 | orchestrator | registry.osism.tech/kolla/release/openvswitch-vswitchd 3.4.2.20250530 ebe56e768165 2 days ago 362MB 2025-06-02 20:27:57.412808 | orchestrator | registry.osism.tech/kolla/release/glance-api 29.0.1.20250530 9ac54d9b8655 2 days ago 1.15GB 2025-06-02 20:27:57.412819 | orchestrator | registry.osism.tech/kolla/release/placement-api 12.0.1.20250530 95e52651071a 2 days ago 1.04GB 2025-06-02 20:27:57.412829 | orchestrator | registry.osism.tech/kolla/release/neutron-server 25.1.1.20250530 47338d40fcbf 2 days ago 1.25GB 2025-06-02 20:27:57.412840 | orchestrator | registry.osism.tech/kolla/release/aodh-listener 19.0.0.20250530 ec3349a6437e 2 days ago 1.04GB 2025-06-02 20:27:57.412851 | orchestrator | registry.osism.tech/kolla/release/aodh-evaluator 19.0.0.20250530 726d5cfde6f9 2 days ago 1.04GB 2025-06-02 20:27:57.412861 | 
orchestrator | registry.osism.tech/kolla/release/aodh-notifier 19.0.0.20250530 c2f966fc60ed 2 days ago 1.04GB 2025-06-02 20:27:57.412872 | orchestrator | registry.osism.tech/kolla/release/aodh-api 19.0.0.20250530 7c85bdb64788 2 days ago 1.04GB 2025-06-02 20:27:57.412883 | orchestrator | registry.osism.tech/kolla/release/magnum-api 19.0.1.20250530 ecd3067dd808 2 days ago 1.2GB 2025-06-02 20:27:57.412894 | orchestrator | registry.osism.tech/kolla/release/magnum-conductor 19.0.1.20250530 95661613cfe8 2 days ago 1.31GB 2025-06-02 20:27:57.412928 | orchestrator | registry.osism.tech/kolla/release/octavia-driver-agent 15.0.1.20250530 41afac8ed4ba 2 days ago 1.12GB 2025-06-02 20:27:57.412940 | orchestrator | registry.osism.tech/kolla/release/octavia-api 15.0.1.20250530 816eaef08c5c 2 days ago 1.12GB 2025-06-02 20:27:57.412950 | orchestrator | registry.osism.tech/kolla/release/octavia-worker 15.0.1.20250530 81c4f823534a 2 days ago 1.1GB 2025-06-02 20:27:57.412961 | orchestrator | registry.osism.tech/kolla/release/octavia-housekeeping 15.0.1.20250530 437ecd9dcceb 2 days ago 1.1GB 2025-06-02 20:27:57.412971 | orchestrator | registry.osism.tech/kolla/release/octavia-health-manager 15.0.1.20250530 fd10912df5f8 2 days ago 1.1GB 2025-06-02 20:27:57.412982 | orchestrator | registry.osism.tech/kolla/release/cinder-scheduler 25.1.1.20250530 8e97f769e43d 2 days ago 1.41GB 2025-06-02 20:27:57.412993 | orchestrator | registry.osism.tech/kolla/release/cinder-api 25.1.1.20250530 1a292444fc87 2 days ago 1.41GB 2025-06-02 20:27:57.413003 | orchestrator | registry.osism.tech/kolla/release/designate-backend-bind9 19.0.1.20250530 9186d487d48c 2 days ago 1.06GB 2025-06-02 20:27:57.413014 | orchestrator | registry.osism.tech/kolla/release/designate-worker 19.0.1.20250530 14234b919f18 2 days ago 1.06GB 2025-06-02 20:27:57.413032 | orchestrator | registry.osism.tech/kolla/release/designate-api 19.0.1.20250530 57148ade6082 2 days ago 1.05GB 2025-06-02 20:27:57.413042 | orchestrator | 
registry.osism.tech/kolla/release/designate-mdns 19.0.1.20250530 6d21806eb92e 2 days ago 1.05GB
2025-06-02 20:27:57.413053 | orchestrator | registry.osism.tech/kolla/release/designate-producer 19.0.1.20250530 d5f39127ee53 2 days ago 1.05GB
2025-06-02 20:27:57.413063 | orchestrator | registry.osism.tech/kolla/release/designate-central 19.0.1.20250530 68be509d15c9 2 days ago 1.05GB
2025-06-02 20:27:57.413080 | orchestrator | registry.osism.tech/kolla/release/ceilometer-central 23.0.0.20250530 aa9066568160 2 days ago 1.04GB
2025-06-02 20:27:57.413090 | orchestrator | registry.osism.tech/kolla/release/ceilometer-notification 23.0.0.20250530 546dea2f2472 2 days ago 1.04GB
2025-06-02 20:27:57.413101 | orchestrator | registry.osism.tech/kolla/release/nova-scheduler 30.0.1.20250530 47425e7b5ce1 2 days ago 1.3GB
2025-06-02 20:27:57.413111 | orchestrator | registry.osism.tech/kolla/release/nova-api 30.0.1.20250530 9fd4859cd2ca 2 days ago 1.29GB
2025-06-02 20:27:57.413122 | orchestrator | registry.osism.tech/kolla/release/nova-novncproxy 30.0.1.20250530 65e1e2f12329 2 days ago 1.42GB
2025-06-02 20:27:57.413133 | orchestrator | registry.osism.tech/kolla/release/nova-conductor 30.0.1.20250530 ded754c3e240 2 days ago 1.29GB
2025-06-02 20:27:57.413143 | orchestrator | registry.osism.tech/kolla/release/barbican-keystone-listener 19.0.1.20250530 dc06d9c53ec5 2 days ago 1.06GB
2025-06-02 20:27:57.413154 | orchestrator | registry.osism.tech/kolla/release/barbican-api 19.0.1.20250530 450ccd1a2872 2 days ago 1.06GB
2025-06-02 20:27:57.413164 | orchestrator | registry.osism.tech/kolla/release/barbican-worker 19.0.1.20250530 2f34913753bd 2 days ago 1.06GB
2025-06-02 20:27:57.413175 | orchestrator | registry.osism.tech/kolla/release/keystone-ssh 26.0.1.20250530 fe53c77abc4a 2 days ago 1.11GB
2025-06-02 20:27:57.413185 | orchestrator | registry.osism.tech/kolla/release/keystone 26.0.1.20250530 0419c85d82ab 2 days ago 1.13GB
2025-06-02 20:27:57.413196 | orchestrator | registry.osism.tech/kolla/release/keystone-fernet 26.0.1.20250530 7eb5295204d1 2 days ago 1.11GB
2025-06-02 20:27:57.413206 | orchestrator | registry.osism.tech/kolla/release/skyline-apiserver 5.0.1.20250530 df0a04869ff0 2 days ago 1.11GB
2025-06-02 20:27:57.413217 | orchestrator | registry.osism.tech/kolla/release/skyline-console 5.0.1.20250530 e1b2b0cc8e5c 2 days ago 1.12GB
2025-06-02 20:27:57.413227 | orchestrator | registry.osism.tech/kolla/release/ovn-nb-db-server 24.9.2.20250530 6a22761bd4f3 2 days ago 947MB
2025-06-02 20:27:57.413238 | orchestrator | registry.osism.tech/kolla/release/ovn-sb-db-server 24.9.2.20250530 63ebc77afae1 2 days ago 947MB
2025-06-02 20:27:57.413248 | orchestrator | registry.osism.tech/kolla/release/ovn-controller 24.9.2.20250530 694606382374 2 days ago 948MB
2025-06-02 20:27:57.413259 | orchestrator | registry.osism.tech/kolla/release/ovn-northd 24.9.2.20250530 5b8b94e53819 2 days ago 948MB
2025-06-02 20:27:57.413270 | orchestrator | registry.osism.tech/osism/ceph-daemon 18.2.7 5f92363b1f93 3 weeks ago 1.27GB
2025-06-02 20:27:57.644211 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2
2025-06-02 20:27:57.644623 | orchestrator | ++ semver 9.1.0 5.0.0
2025-06-02 20:27:57.693213 | orchestrator |
2025-06-02 20:27:57.693340 | orchestrator | ## Containers @ testbed-node-1
2025-06-02 20:27:57.693357 | orchestrator |
2025-06-02 20:27:57.693368 | orchestrator | + [[ 1 -eq -1 ]]
2025-06-02 20:27:57.693378 | orchestrator | + echo
2025-06-02 20:27:57.693389 | orchestrator | + echo '## Containers @ testbed-node-1'
2025-06-02 20:27:57.693400 | orchestrator | + echo
2025-06-02 20:27:57.693436 | orchestrator | + osism container testbed-node-1 ps
2025-06-02 20:27:59.805781 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
2025-06-02 20:27:59.805910 | orchestrator | b703e20619af registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250530 "dumb-init --single-…" 7 minutes ago Up 7 minutes (healthy) nova_novncproxy
2025-06-02 20:27:59.805928 | orchestrator | c99a3e6507f9 registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250530 "dumb-init --single-…" 7 minutes ago Up 7 minutes (healthy) nova_conductor
2025-06-02 20:27:59.805940 | orchestrator | 6f82fb1cf375 registry.osism.tech/kolla/release/grafana:12.0.1.20250530 "dumb-init --single-…" 8 minutes ago Up 8 minutes grafana
2025-06-02 20:27:59.805951 | orchestrator | 6332423acb4a registry.osism.tech/kolla/release/nova-api:30.0.1.20250530 "dumb-init --single-…" 8 minutes ago Up 8 minutes (healthy) nova_api
2025-06-02 20:27:59.805982 | orchestrator | f03adfa8eabe registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530 "dumb-init --single-…" 8 minutes ago Up 8 minutes (healthy) nova_scheduler
2025-06-02 20:27:59.805994 | orchestrator | d99fa088cff9 registry.osism.tech/kolla/release/glance-api:29.0.1.20250530 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) glance_api
2025-06-02 20:27:59.806005 | orchestrator | 0b48a9d789a7 registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) cinder_scheduler
2025-06-02 20:27:59.806078 | orchestrator | 4ff4d3094fdb registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) cinder_api
2025-06-02 20:27:59.806093 | orchestrator | b124f20029b8 registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250530 "dumb-init --single-…" 12 minutes ago Up 12 minutes prometheus_elasticsearch_exporter
2025-06-02 20:27:59.806108 | orchestrator | b71436cb2186 registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_cadvisor
2025-06-02 20:27:59.806127 | orchestrator | 1336b7aa0f77 registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250530 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_memcached_exporter
2025-06-02 20:27:59.806145 | orchestrator | 338f93a327ef registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250530 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_mysqld_exporter
2025-06-02 20:27:59.806163 | orchestrator | 4f7f0b746aff registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_node_exporter
2025-06-02 20:27:59.806180 | orchestrator | d7bc305ddf31 registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) magnum_conductor
2025-06-02 20:27:59.806198 | orchestrator | 35f609d56a75 registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) magnum_api
2025-06-02 20:27:59.806216 | orchestrator | 5430359c2c56 registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530 "dumb-init --single-…" 14 minutes ago Up 13 minutes (healthy) neutron_server
2025-06-02 20:27:59.806233 | orchestrator | 98ae3bf12108 registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) designate_worker
2025-06-02 20:27:59.806276 | orchestrator | 8314105f9bc5 registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) designate_mdns
2025-06-02 20:27:59.806299 | orchestrator | 8a4029ba880a registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) designate_producer
2025-06-02 20:27:59.806343 | orchestrator | eab8613c8dbc registry.osism.tech/kolla/release/designate-central:19.0.1.20250530 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) designate_central
2025-06-02 20:27:59.806364 | orchestrator | b81f15a07cbc registry.osism.tech/kolla/release/designate-api:19.0.1.20250530 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) designate_api
2025-06-02 20:27:59.806383 | orchestrator | ba85e72382f4 registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) designate_backend_bind9
2025-06-02 20:27:59.806402 | orchestrator | 39c701b11a18 registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) barbican_worker
2025-06-02 20:27:59.806422 | orchestrator | 90849c9cc871 registry.osism.tech/kolla/release/placement-api:12.0.1.20250530 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) placement_api
2025-06-02 20:27:59.806445 | orchestrator | 7584afc93caa registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) barbican_keystone_listener
2025-06-02 20:27:59.806459 | orchestrator | be291a9bb614 registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) barbican_api
2025-06-02 20:27:59.806472 | orchestrator | 9ab2726a93d1 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mgr -…" 16 minutes ago Up 16 minutes ceph-mgr-testbed-node-1
2025-06-02 20:27:59.806485 | orchestrator | 1906b18003bf registry.osism.tech/kolla/release/keystone:26.0.1.20250530 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) keystone
2025-06-02 20:27:59.806499 | orchestrator | 4b40332d32a6 registry.osism.tech/kolla/release/horizon:25.1.1.20250530 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) horizon
2025-06-02 20:27:59.806512 | orchestrator | 5081793a5328 registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) keystone_fernet
2025-06-02 20:27:59.806525 | orchestrator | cd32fcabf918 registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) keystone_ssh
2025-06-02 20:27:59.806538 | orchestrator | fcb08538db69 registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250530 "dumb-init --single-…" 20 minutes ago Up 20 minutes (healthy) opensearch_dashboards
2025-06-02 20:27:59.806577 | orchestrator | 5ba73a8190a2 registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250530 "dumb-init -- kolla_…" 21 minutes ago Up 21 minutes (healthy) mariadb
2025-06-02 20:27:59.806591 | orchestrator | 1215204e7bf0 registry.osism.tech/kolla/release/opensearch:2.19.2.20250530 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) opensearch
2025-06-02 20:27:59.806604 | orchestrator | bd133f9dfba3 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-crash" 23 minutes ago Up 23 minutes ceph-crash-testbed-node-1
2025-06-02 20:27:59.806626 | orchestrator | 7e97d6c75580 registry.osism.tech/kolla/release/keepalived:2.2.7.20250530 "dumb-init --single-…" 23 minutes ago Up 23 minutes keepalived
2025-06-02 20:27:59.806639 | orchestrator | 7e81d2c3e529 registry.osism.tech/kolla/release/proxysql:2.7.3.20250530 "dumb-init --single-…" 23 minutes ago Up 23 minutes (healthy) proxysql
2025-06-02 20:27:59.806651 | orchestrator | 8b88fdd82d6b registry.osism.tech/kolla/release/haproxy:2.6.12.20250530 "dumb-init --single-…" 23 minutes ago Up 23 minutes (healthy) haproxy
2025-06-02 20:27:59.806663 | orchestrator | 0dfbbc43f36b registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250530 "dumb-init --single-…" 26 minutes ago Up 26 minutes ovn_northd
2025-06-02 20:27:59.806674 | orchestrator | 78abfa3f2859 registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250530 "dumb-init --single-…" 26 minutes ago Up 26 minutes ovn_sb_db
2025-06-02 20:27:59.806692 | orchestrator | 64f51fb39119 registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250530 "dumb-init --single-…" 26 minutes ago Up 26 minutes ovn_nb_db
2025-06-02 20:27:59.806704 | orchestrator | 243f156c4bc8 registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530 "dumb-init --single-…" 27 minutes ago Up 27 minutes ovn_controller
2025-06-02 20:27:59.806715 | orchestrator | 2487d2f5f47a registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250530 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) rabbitmq
2025-06-02 20:27:59.806726 | orchestrator | fe182c76eb92 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mon -…" 27 minutes ago Up 27 minutes ceph-mon-testbed-node-1
2025-06-02 20:27:59.806736 | orchestrator | 1f92a5fbc2d8 registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250530 "dumb-init --single-…" 29 minutes ago Up 28 minutes (healthy) openvswitch_vswitchd
2025-06-02 20:27:59.806747 | orchestrator | 9ec1e1d4908f registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250530 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) openvswitch_db
2025-06-02 20:27:59.806758 | orchestrator | 75e868e92ab6 registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250530 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) redis_sentinel
2025-06-02 20:27:59.806775 | orchestrator | 9206d0c611d2 registry.osism.tech/kolla/release/redis:7.0.15.20250530 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) redis
2025-06-02 20:27:59.806786 | orchestrator | f501b29bd4b9 registry.osism.tech/kolla/release/memcached:1.6.18.20250530 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) memcached
2025-06-02 20:27:59.806797 | orchestrator | 7c6a1f0c0202 registry.osism.tech/kolla/release/cron:3.0.20250530 "dumb-init --single-…" 29 minutes ago Up 29 minutes cron
2025-06-02 20:27:59.806808 | orchestrator | e0f12a843e70 registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530 "dumb-init --single-…" 30 minutes ago Up 30 minutes kolla_toolbox
2025-06-02 20:27:59.806819 | orchestrator | bc78be191a1f registry.osism.tech/kolla/release/fluentd:5.0.7.20250530 "dumb-init --single-…" 30 minutes ago Up 30 minutes fluentd
2025-06-02 20:28:00.041150 | orchestrator |
2025-06-02 20:28:00.041353 | orchestrator | ## Images @ testbed-node-1
2025-06-02 20:28:00.041383 | orchestrator |
2025-06-02 20:28:00.041400 | orchestrator | + echo
2025-06-02 20:28:00.041416 | orchestrator | + echo '## Images @ testbed-node-1'
2025-06-02 20:28:00.041433 | orchestrator | + echo
2025-06-02 20:28:00.041445 | orchestrator | + osism container testbed-node-1 images
2025-06-02 20:28:02.126287 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE
2025-06-02 20:28:02.126403 | orchestrator | registry.osism.tech/kolla/release/memcached 1.6.18.20250530 174e220ad7bd 2 days ago 319MB
2025-06-02 20:28:02.126414 | orchestrator | registry.osism.tech/kolla/release/cron 3.0.20250530 fc4477504c4f 2 days ago 319MB
2025-06-02 20:28:02.126420 | orchestrator | registry.osism.tech/kolla/release/keepalived 2.2.7.20250530 e984e28a57b0 2 days ago 330MB
2025-06-02 20:28:02.126426 | orchestrator | registry.osism.tech/kolla/release/opensearch 2.19.2.20250530 4cfdb500286b 2 days ago 1.59GB
2025-06-02 20:28:02.126433 | orchestrator | registry.osism.tech/kolla/release/opensearch-dashboards 2.19.2.20250530 6fcb2e3a907b 2 days ago 1.55GB
2025-06-02 20:28:02.126439 | orchestrator | registry.osism.tech/kolla/release/proxysql 2.7.3.20250530 a15c96a3369b 2 days ago 419MB
2025-06-02 20:28:02.126446 | orchestrator | registry.osism.tech/kolla/release/kolla-toolbox 19.4.1.20250530 33529d2e8ea7 2 days ago 747MB
2025-06-02 20:28:02.126452 | orchestrator | registry.osism.tech/kolla/release/rabbitmq 3.13.7.20250530 6b32f249a415 2 days ago 376MB
2025-06-02 20:28:02.126459 | orchestrator | registry.osism.tech/kolla/release/haproxy 2.6.12.20250530 e5b003449f46 2 days ago 327MB
2025-06-02 20:28:02.126465 | orchestrator | registry.osism.tech/kolla/release/fluentd 5.0.7.20250530 a0c9ae28d2e7 2 days ago 629MB
2025-06-02 20:28:02.126471 | orchestrator | registry.osism.tech/kolla/release/grafana 12.0.1.20250530 a3fa8a6a4c8c 2 days ago 1.01GB
2025-06-02 20:28:02.126477 | orchestrator | registry.osism.tech/kolla/release/mariadb-server 10.11.13.20250530 5a4e6980c376 2 days ago 591MB
2025-06-02 20:28:02.126483 | orchestrator | registry.osism.tech/kolla/release/prometheus-mysqld-exporter 0.16.0.20250530 acd5d7cf8545 2 days ago 354MB
2025-06-02 20:28:02.126489 | orchestrator | registry.osism.tech/kolla/release/prometheus-cadvisor 0.49.2.20250530 b51a156bac81 2 days ago 411MB
2025-06-02 20:28:02.126496 | orchestrator | registry.osism.tech/kolla/release/prometheus-memcached-exporter 0.15.0.20250530 528199032acc 2 days ago 352MB
2025-06-02 20:28:02.126503 | orchestrator | registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter 1.8.0.20250530 1ba9b68ab0fa 2 days ago 345MB
2025-06-02 20:28:02.126510 | orchestrator | registry.osism.tech/kolla/release/prometheus-node-exporter 1.8.2.20250530 a076e6a80bbc 2 days ago 359MB
2025-06-02 20:28:02.126517 | orchestrator | registry.osism.tech/kolla/release/redis 7.0.15.20250530 854fb3fbb8d1 2 days ago 326MB
2025-06-02 20:28:02.126524 | orchestrator | registry.osism.tech/kolla/release/redis-sentinel 7.0.15.20250530 4439f43e0847 2 days ago 325MB
2025-06-02 20:28:02.126530 | orchestrator | registry.osism.tech/kolla/release/horizon 25.1.1.20250530 81218760d1ef 2 days ago 1.21GB
2025-06-02 20:28:02.126537 | orchestrator | registry.osism.tech/kolla/release/openvswitch-db-server 3.4.2.20250530 8775c34ea5d6 2 days ago 362MB
2025-06-02 20:28:02.126543 | orchestrator | registry.osism.tech/kolla/release/openvswitch-vswitchd 3.4.2.20250530 ebe56e768165 2 days ago 362MB
2025-06-02 20:28:02.126654 | orchestrator | registry.osism.tech/kolla/release/glance-api 29.0.1.20250530 9ac54d9b8655 2 days ago 1.15GB
2025-06-02 20:28:02.126663 | orchestrator | registry.osism.tech/kolla/release/placement-api 12.0.1.20250530 95e52651071a 2 days ago 1.04GB
2025-06-02 20:28:02.126670 | orchestrator | registry.osism.tech/kolla/release/neutron-server 25.1.1.20250530 47338d40fcbf 2 days ago 1.25GB
2025-06-02 20:28:02.126678 | orchestrator | registry.osism.tech/kolla/release/magnum-api 19.0.1.20250530 ecd3067dd808 2 days ago 1.2GB
2025-06-02 20:28:02.126693 | orchestrator | registry.osism.tech/kolla/release/magnum-conductor 19.0.1.20250530 95661613cfe8 2 days ago 1.31GB
2025-06-02 20:28:02.126700 | orchestrator | registry.osism.tech/kolla/release/cinder-scheduler 25.1.1.20250530 8e97f769e43d 2 days ago 1.41GB
2025-06-02 20:28:02.126706 | orchestrator | registry.osism.tech/kolla/release/cinder-api 25.1.1.20250530 1a292444fc87 2 days ago 1.41GB
2025-06-02 20:28:02.126713 | orchestrator | registry.osism.tech/kolla/release/designate-backend-bind9 19.0.1.20250530 9186d487d48c 2 days ago 1.06GB
2025-06-02 20:28:02.126720 | orchestrator | registry.osism.tech/kolla/release/designate-worker 19.0.1.20250530 14234b919f18 2 days ago 1.06GB
2025-06-02 20:28:02.126760 | orchestrator | registry.osism.tech/kolla/release/designate-api 19.0.1.20250530 57148ade6082 2 days ago 1.05GB
2025-06-02 20:28:02.126768 | orchestrator | registry.osism.tech/kolla/release/designate-mdns 19.0.1.20250530 6d21806eb92e 2 days ago 1.05GB
2025-06-02 20:28:02.126775 | orchestrator | registry.osism.tech/kolla/release/designate-producer 19.0.1.20250530 d5f39127ee53 2 days ago 1.05GB
2025-06-02 20:28:02.126782 | orchestrator | registry.osism.tech/kolla/release/designate-central 19.0.1.20250530 68be509d15c9 2 days ago 1.05GB
2025-06-02 20:28:02.126789 | orchestrator | registry.osism.tech/kolla/release/nova-scheduler 30.0.1.20250530 47425e7b5ce1 2 days ago 1.3GB
2025-06-02 20:28:02.126796 | orchestrator | registry.osism.tech/kolla/release/nova-api 30.0.1.20250530 9fd4859cd2ca 2 days ago 1.29GB
2025-06-02 20:28:02.126803 | orchestrator | registry.osism.tech/kolla/release/nova-novncproxy 30.0.1.20250530 65e1e2f12329 2 days ago 1.42GB
2025-06-02 20:28:02.126810 | orchestrator | registry.osism.tech/kolla/release/nova-conductor 30.0.1.20250530 ded754c3e240 2 days ago 1.29GB
2025-06-02 20:28:02.126817 | orchestrator | registry.osism.tech/kolla/release/barbican-keystone-listener 19.0.1.20250530 dc06d9c53ec5 2 days ago 1.06GB
2025-06-02 20:28:02.126824 | orchestrator | registry.osism.tech/kolla/release/barbican-api 19.0.1.20250530 450ccd1a2872 2 days ago 1.06GB
2025-06-02 20:28:02.126831 | orchestrator | registry.osism.tech/kolla/release/barbican-worker 19.0.1.20250530 2f34913753bd 2 days ago 1.06GB
2025-06-02 20:28:02.126840 | orchestrator | registry.osism.tech/kolla/release/keystone-ssh 26.0.1.20250530 fe53c77abc4a 2 days ago 1.11GB
2025-06-02 20:28:02.126848 | orchestrator | registry.osism.tech/kolla/release/keystone 26.0.1.20250530 0419c85d82ab 2 days ago 1.13GB
2025-06-02 20:28:02.126854 | orchestrator | registry.osism.tech/kolla/release/keystone-fernet 26.0.1.20250530 7eb5295204d1 2 days ago 1.11GB
2025-06-02 20:28:02.126861 | orchestrator | registry.osism.tech/kolla/release/ovn-nb-db-server 24.9.2.20250530 6a22761bd4f3 2 days ago 947MB
2025-06-02 20:28:02.126868 | orchestrator | registry.osism.tech/kolla/release/ovn-controller 24.9.2.20250530 694606382374 2 days ago 948MB
2025-06-02 20:28:02.126875 | orchestrator | registry.osism.tech/kolla/release/ovn-sb-db-server 24.9.2.20250530 63ebc77afae1 2 days ago 947MB
2025-06-02 20:28:02.126883 | orchestrator | registry.osism.tech/kolla/release/ovn-northd 24.9.2.20250530 5b8b94e53819 2 days ago 948MB
2025-06-02 20:28:02.126890 | orchestrator | registry.osism.tech/osism/ceph-daemon 18.2.7 5f92363b1f93 3 weeks ago 1.27GB
2025-06-02 20:28:02.374190 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2
2025-06-02 20:28:02.374302 | orchestrator | ++ semver 9.1.0 5.0.0
2025-06-02 20:28:02.425125 | orchestrator |
2025-06-02 20:28:02.425263 | orchestrator | ## Containers @ testbed-node-2
2025-06-02 20:28:02.425294 | orchestrator |
2025-06-02 20:28:02.425315 | orchestrator | + [[ 1 -eq -1 ]]
2025-06-02 20:28:02.425335 | orchestrator | + echo
2025-06-02 20:28:02.425388 | orchestrator | + echo '## Containers @ testbed-node-2'
2025-06-02 20:28:02.425402 | orchestrator | + echo
2025-06-02 20:28:02.425412 | orchestrator | + osism container testbed-node-2 ps
2025-06-02 20:28:04.504411 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
2025-06-02 20:28:04.504520 | orchestrator | 092c3b6e3b1d registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250530 "dumb-init --single-…" 7 minutes ago Up 7 minutes (healthy) nova_novncproxy
2025-06-02 20:28:04.504528 | orchestrator | 69ab570f4d54 registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250530 "dumb-init --single-…" 7 minutes ago Up 7 minutes (healthy) nova_conductor
2025-06-02 20:28:04.504533 | orchestrator | c2d2c9d08761 registry.osism.tech/kolla/release/grafana:12.0.1.20250530 "dumb-init --single-…" 8 minutes ago Up 8 minutes grafana
2025-06-02 20:28:04.504537 | orchestrator | 481cf6e9cef4 registry.osism.tech/kolla/release/nova-api:30.0.1.20250530 "dumb-init --single-…" 8 minutes ago Up 8 minutes (healthy) nova_api
2025-06-02 20:28:04.504541 | orchestrator | 16e715495cdb registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530 "dumb-init --single-…" 9 minutes ago Up 8 minutes (healthy) nova_scheduler
2025-06-02 20:28:04.504606 | orchestrator | 36b651f23e05 registry.osism.tech/kolla/release/glance-api:29.0.1.20250530 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) glance_api
2025-06-02 20:28:04.504613 | orchestrator | a2b703443a1f registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) cinder_scheduler
2025-06-02 20:28:04.504617 | orchestrator | d0c1adfa1005 registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) cinder_api
2025-06-02 20:28:04.504621 | orchestrator | 5691ba0f5618 registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250530 "dumb-init --single-…" 12 minutes ago Up 12 minutes prometheus_elasticsearch_exporter
2025-06-02 20:28:04.504627 | orchestrator | ac7c5f1ddcad registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_cadvisor
2025-06-02 20:28:04.504632 | orchestrator | fcf4b75be694 registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250530 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_memcached_exporter
2025-06-02 20:28:04.504636 | orchestrator | 7498083b6a55 registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250530 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_mysqld_exporter
2025-06-02 20:28:04.504640 | orchestrator | 63f51d711a8e registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_node_exporter
2025-06-02 20:28:04.504644 | orchestrator | baeb2e19dbde registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) magnum_conductor
2025-06-02 20:28:04.504666 | orchestrator | b3b05c6970b7 registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) magnum_api
2025-06-02 20:28:04.504670 | orchestrator | 867eedf8a571 registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) neutron_server
2025-06-02 20:28:04.504674 | orchestrator | c9c50ff912c4 registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) designate_worker
2025-06-02 20:28:04.504696 | orchestrator | 3ebfc58ce3fe registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) designate_mdns
2025-06-02 20:28:04.504700 | orchestrator | c5a8e3d668da registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) designate_producer
2025-06-02 20:28:04.504719 | orchestrator | 40182a93d1b4 registry.osism.tech/kolla/release/designate-central:19.0.1.20250530 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) designate_central
2025-06-02 20:28:04.504723 | orchestrator | f82943c18ffd registry.osism.tech/kolla/release/designate-api:19.0.1.20250530 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) designate_api
2025-06-02 20:28:04.504727 | orchestrator | 481a1f79386b registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) designate_backend_bind9
2025-06-02 20:28:04.504731 | orchestrator | 1d028f2a0f67 registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) barbican_worker
2025-06-02 20:28:04.504735 | orchestrator | 3e9cfbc12f8b registry.osism.tech/kolla/release/placement-api:12.0.1.20250530 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) placement_api
2025-06-02 20:28:04.504738 | orchestrator | af8942d70f55 registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) barbican_keystone_listener
2025-06-02 20:28:04.504742 | orchestrator | f88517488c98 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mgr -…" 16 minutes ago Up 16 minutes ceph-mgr-testbed-node-2
2025-06-02 20:28:04.504746 | orchestrator | c978ebc61381 registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) barbican_api
2025-06-02 20:28:04.504750 | orchestrator | 39637682e1e7 registry.osism.tech/kolla/release/keystone:26.0.1.20250530 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) keystone
2025-06-02 20:28:04.504754 | orchestrator | 466a8fc282e7 registry.osism.tech/kolla/release/horizon:25.1.1.20250530 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) horizon
2025-06-02 20:28:04.504758 | orchestrator | 705c0751ad5f registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) keystone_fernet
2025-06-02 20:28:04.504761 | orchestrator | 6aff73996ee9 registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) keystone_ssh
2025-06-02 20:28:04.504765 | orchestrator | 865f701a0cdd registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250530 "dumb-init --single-…" 20 minutes ago Up 20 minutes (healthy) opensearch_dashboards
2025-06-02 20:28:04.504769 | orchestrator | 6d9dfb9a3048 registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250530 "dumb-init -- kolla_…" 21 minutes ago Up 21 minutes (healthy) mariadb
2025-06-02 20:28:04.504773 | orchestrator | 888a3bdcf964 registry.osism.tech/kolla/release/opensearch:2.19.2.20250530 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) opensearch
2025-06-02 20:28:04.504777 | orchestrator | 7808090f0dbe registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-crash" 23 minutes ago Up 23 minutes ceph-crash-testbed-node-2
2025-06-02 20:28:04.504786 | orchestrator | 033ffd26eb3f registry.osism.tech/kolla/release/keepalived:2.2.7.20250530 "dumb-init --single-…" 23 minutes ago Up 23 minutes keepalived
2025-06-02 20:28:04.504789 | orchestrator | 2623e5ff1e9d registry.osism.tech/kolla/release/proxysql:2.7.3.20250530 "dumb-init --single-…" 23 minutes ago Up 23 minutes (healthy) proxysql
2025-06-02 20:28:04.504793 | orchestrator | b9c5dbee7662 registry.osism.tech/kolla/release/haproxy:2.6.12.20250530 "dumb-init --single-…" 24 minutes ago Up 24 minutes (healthy) haproxy
2025-06-02 20:28:04.504797 | orchestrator | 6d9c304d536d registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250530 "dumb-init --single-…" 26 minutes ago Up 26 minutes ovn_northd
2025-06-02 20:28:04.504801 | orchestrator | ab4c87bef134 registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250530 "dumb-init --single-…" 26 minutes ago Up 26 minutes ovn_sb_db
2025-06-02 20:28:04.504808 | orchestrator | 4d9dd9bce0d2 registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250530 "dumb-init --single-…" 27 minutes ago Up 26 minutes ovn_nb_db
2025-06-02 20:28:04.504812 | orchestrator | 9780e1960c4b registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250530 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) rabbitmq
2025-06-02 20:28:04.504816 | orchestrator | e8772879951a registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530 "dumb-init --single-…" 27 minutes ago Up 27 minutes ovn_controller
2025-06-02 20:28:04.504820 | orchestrator | af9cfd845014 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mon -…" 28 minutes ago Up 28 minutes ceph-mon-testbed-node-2
2025-06-02 20:28:04.504826 | orchestrator | 5e314642dbd1 registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250530 "dumb-init --single-…" 29 minutes ago Up 28 minutes (healthy) openvswitch_vswitchd
2025-06-02 20:28:04.504832 | orchestrator | ed2d3fcfe36e registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250530 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) openvswitch_db
2025-06-02 20:28:04.504839 | orchestrator | 6e8ad12035f2 registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250530 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) redis_sentinel
2025-06-02 20:28:04.504849 | orchestrator | 216cc621395e registry.osism.tech/kolla/release/redis:7.0.15.20250530 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) redis
2025-06-02 20:28:04.504855 | orchestrator | 28835e12f700 registry.osism.tech/kolla/release/memcached:1.6.18.20250530 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) memcached
2025-06-02 20:28:04.504861 | orchestrator | 5eeba81e2907 registry.osism.tech/kolla/release/cron:3.0.20250530 "dumb-init --single-…" 29 minutes ago Up 29 minutes cron
2025-06-02 20:28:04.504867 | orchestrator | 70ad444be417 registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530 "dumb-init --single-…" 30 minutes ago Up 30 minutes kolla_toolbox
2025-06-02 20:28:04.504873 | orchestrator | 19716e6e4195 registry.osism.tech/kolla/release/fluentd:5.0.7.20250530 "dumb-init --single-…" 30 minutes ago Up 30 minutes fluentd
2025-06-02 20:28:04.738378 | orchestrator |
2025-06-02 20:28:04.738488 | orchestrator | ## Images @ testbed-node-2
2025-06-02 20:28:04.738504 | orchestrator |
2025-06-02 20:28:04.738516 | orchestrator | + echo
2025-06-02 20:28:04.738528 | orchestrator | + echo '## Images @ testbed-node-2'
2025-06-02 20:28:04.738540 | orchestrator | + echo
2025-06-02 20:28:04.738658 | orchestrator | + osism container testbed-node-2 images
2025-06-02 20:28:06.815140 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE
2025-06-02 20:28:06.815246 | orchestrator | registry.osism.tech/kolla/release/memcached 1.6.18.20250530 174e220ad7bd 2 days ago 319MB
2025-06-02 20:28:06.815260 | orchestrator | registry.osism.tech/kolla/release/cron 3.0.20250530 fc4477504c4f 2 days ago 319MB
2025-06-02 20:28:06.815272 | orchestrator | registry.osism.tech/kolla/release/keepalived 2.2.7.20250530 e984e28a57b0 2 days ago 330MB
2025-06-02 20:28:06.815299 | orchestrator | registry.osism.tech/kolla/release/opensearch 2.19.2.20250530 4cfdb500286b 2 days ago 1.59GB
2025-06-02 20:28:06.815311 | orchestrator | registry.osism.tech/kolla/release/opensearch-dashboards 2.19.2.20250530 6fcb2e3a907b 2 days ago 1.55GB
2025-06-02 20:28:06.815322 | orchestrator | registry.osism.tech/kolla/release/proxysql 2.7.3.20250530 a15c96a3369b 2 days ago 419MB
2025-06-02 20:28:06.815333 | orchestrator | registry.osism.tech/kolla/release/kolla-toolbox 19.4.1.20250530 33529d2e8ea7 2 days ago 747MB
2025-06-02 20:28:06.815344 | orchestrator | registry.osism.tech/kolla/release/rabbitmq 3.13.7.20250530 6b32f249a415 2 days ago 376MB
2025-06-02 20:28:06.815355 | orchestrator | registry.osism.tech/kolla/release/haproxy 2.6.12.20250530 e5b003449f46 2 days ago 327MB
2025-06-02 20:28:06.815366 | orchestrator | registry.osism.tech/kolla/release/fluentd 5.0.7.20250530 a0c9ae28d2e7 2 days ago 629MB
2025-06-02 20:28:06.815377 | orchestrator | registry.osism.tech/kolla/release/grafana 12.0.1.20250530 a3fa8a6a4c8c 2 days ago 1.01GB
2025-06-02 20:28:06.815388 | orchestrator | registry.osism.tech/kolla/release/mariadb-server 10.11.13.20250530 5a4e6980c376 2 days ago 591MB
2025-06-02 20:28:06.815399 | orchestrator | registry.osism.tech/kolla/release/prometheus-mysqld-exporter 0.16.0.20250530 acd5d7cf8545 2 days ago 354MB
2025-06-02 20:28:06.815409 | orchestrator | registry.osism.tech/kolla/release/prometheus-cadvisor 0.49.2.20250530 b51a156bac81 2 days ago 411MB
2025-06-02 20:28:06.815420 | orchestrator | registry.osism.tech/kolla/release/prometheus-memcached-exporter 0.15.0.20250530 528199032acc 2 days ago 352MB
2025-06-02 20:28:06.815433 | orchestrator | registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter 1.8.0.20250530 1ba9b68ab0fa 2 days ago 345MB
2025-06-02 20:28:06.815444 | orchestrator | registry.osism.tech/kolla/release/prometheus-node-exporter 1.8.2.20250530 a076e6a80bbc 2 days ago 359MB
2025-06-02 20:28:06.815454 | orchestrator | registry.osism.tech/kolla/release/redis 7.0.15.20250530 854fb3fbb8d1 2 days ago 326MB
2025-06-02 20:28:06.815465 | orchestrator | registry.osism.tech/kolla/release/redis-sentinel 7.0.15.20250530 4439f43e0847 2 days ago 325MB
2025-06-02 20:28:06.815476 | orchestrator | registry.osism.tech/kolla/release/horizon 25.1.1.20250530 81218760d1ef 2 days ago 1.21GB
2025-06-02 20:28:06.815487 | orchestrator | registry.osism.tech/kolla/release/openvswitch-db-server 3.4.2.20250530 8775c34ea5d6 2 days ago 362MB
2025-06-02 20:28:06.815498 | orchestrator | registry.osism.tech/kolla/release/openvswitch-vswitchd 3.4.2.20250530 ebe56e768165 2 days ago 362MB
2025-06-02 20:28:06.815509 | orchestrator | registry.osism.tech/kolla/release/glance-api 29.0.1.20250530 9ac54d9b8655 2 days ago 1.15GB
2025-06-02 20:28:06.815520 | orchestrator | registry.osism.tech/kolla/release/placement-api 12.0.1.20250530 95e52651071a 2 days ago 1.04GB
2025-06-02 20:28:06.815531 | orchestrator | registry.osism.tech/kolla/release/neutron-server 25.1.1.20250530 47338d40fcbf 2 days ago 1.25GB
2025-06-02 20:28:06.815620 | orchestrator | registry.osism.tech/kolla/release/magnum-api 19.0.1.20250530 ecd3067dd808 2 days ago 1.2GB
2025-06-02 20:28:06.815633 | orchestrator | registry.osism.tech/kolla/release/magnum-conductor 19.0.1.20250530 95661613cfe8 2 days ago 1.31GB
2025-06-02 20:28:06.815644 | orchestrator | registry.osism.tech/kolla/release/cinder-scheduler 25.1.1.20250530 8e97f769e43d 2 days ago 1.41GB
2025-06-02 20:28:06.815654 | orchestrator | registry.osism.tech/kolla/release/cinder-api 25.1.1.20250530 1a292444fc87 2 days ago 1.41GB
2025-06-02 20:28:06.815665 | orchestrator | registry.osism.tech/kolla/release/designate-backend-bind9 19.0.1.20250530 9186d487d48c 2 days ago 1.06GB
2025-06-02 20:28:06.815679 | orchestrator | registry.osism.tech/kolla/release/designate-worker 19.0.1.20250530 14234b919f18 2 days ago 1.06GB
2025-06-02 20:28:06.815710 | orchestrator | registry.osism.tech/kolla/release/designate-api 19.0.1.20250530 57148ade6082 2 days ago 1.05GB
2025-06-02 20:28:06.815723 | orchestrator | registry.osism.tech/kolla/release/designate-mdns 19.0.1.20250530 6d21806eb92e 2 days ago 1.05GB
2025-06-02 20:28:06.815735 | orchestrator | registry.osism.tech/kolla/release/designate-producer 19.0.1.20250530 d5f39127ee53 2 days ago 1.05GB
2025-06-02 20:28:06.815748 | orchestrator | registry.osism.tech/kolla/release/designate-central 19.0.1.20250530 68be509d15c9 2 days ago 1.05GB
2025-06-02 20:28:06.815760 | orchestrator | registry.osism.tech/kolla/release/nova-scheduler 30.0.1.20250530 47425e7b5ce1 2 days ago 1.3GB
2025-06-02 20:28:06.815774 | orchestrator | registry.osism.tech/kolla/release/nova-api 30.0.1.20250530 9fd4859cd2ca 2 days ago 1.29GB
2025-06-02 20:28:06.815786 | orchestrator | registry.osism.tech/kolla/release/nova-novncproxy 30.0.1.20250530 65e1e2f12329 2 days ago 1.42GB
2025-06-02 20:28:06.815800 | orchestrator | registry.osism.tech/kolla/release/nova-conductor 30.0.1.20250530 ded754c3e240 2 days ago 1.29GB
2025-06-02 20:28:06.815812 | orchestrator | registry.osism.tech/kolla/release/barbican-keystone-listener 19.0.1.20250530 dc06d9c53ec5 2 days ago 1.06GB
2025-06-02 20:28:06.815826 | orchestrator | registry.osism.tech/kolla/release/barbican-api 19.0.1.20250530 450ccd1a2872 2 days ago 1.06GB
2025-06-02 20:28:06.815839 | orchestrator | registry.osism.tech/kolla/release/barbican-worker 19.0.1.20250530 2f34913753bd 2 days ago 1.06GB
2025-06-02 20:28:06.815850 | orchestrator | registry.osism.tech/kolla/release/keystone-ssh 26.0.1.20250530 fe53c77abc4a 2 days ago 1.11GB
2025-06-02 20:28:06.815860 | orchestrator | registry.osism.tech/kolla/release/keystone 26.0.1.20250530 0419c85d82ab 2 days ago 1.13GB
2025-06-02 20:28:06.815871 | orchestrator | registry.osism.tech/kolla/release/keystone-fernet 26.0.1.20250530 7eb5295204d1 2 days ago 1.11GB
2025-06-02 20:28:06.815881 | orchestrator | registry.osism.tech/kolla/release/ovn-nb-db-server 24.9.2.20250530 6a22761bd4f3 2 days ago 947MB
2025-06-02 20:28:06.815892 | orchestrator | registry.osism.tech/kolla/release/ovn-controller 24.9.2.20250530 694606382374 2 days ago 948MB
2025-06-02 20:28:06.815903 | orchestrator | registry.osism.tech/kolla/release/ovn-sb-db-server 24.9.2.20250530 63ebc77afae1 2 days ago 947MB
2025-06-02 20:28:06.815923 | orchestrator | registry.osism.tech/kolla/release/ovn-northd 24.9.2.20250530 5b8b94e53819 2 days ago 948MB
2025-06-02 20:28:06.815935 | orchestrator | registry.osism.tech/osism/ceph-daemon 18.2.7
5f92363b1f93 3 weeks ago 1.27GB 2025-06-02 20:28:07.040741 | orchestrator | + sh -c /opt/configuration/scripts/check-services.sh 2025-06-02 20:28:07.049774 | orchestrator | + set -e 2025-06-02 20:28:07.049856 | orchestrator | + source /opt/manager-vars.sh 2025-06-02 20:28:07.051423 | orchestrator | ++ export NUMBER_OF_NODES=6 2025-06-02 20:28:07.051495 | orchestrator | ++ NUMBER_OF_NODES=6 2025-06-02 20:28:07.051503 | orchestrator | ++ export CEPH_VERSION=reef 2025-06-02 20:28:07.051510 | orchestrator | ++ CEPH_VERSION=reef 2025-06-02 20:28:07.051517 | orchestrator | ++ export CONFIGURATION_VERSION=main 2025-06-02 20:28:07.051525 | orchestrator | ++ CONFIGURATION_VERSION=main 2025-06-02 20:28:07.051531 | orchestrator | ++ export MANAGER_VERSION=9.1.0 2025-06-02 20:28:07.051539 | orchestrator | ++ MANAGER_VERSION=9.1.0 2025-06-02 20:28:07.051590 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2025-06-02 20:28:07.051603 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2025-06-02 20:28:07.051609 | orchestrator | ++ export ARA=false 2025-06-02 20:28:07.051616 | orchestrator | ++ ARA=false 2025-06-02 20:28:07.051623 | orchestrator | ++ export DEPLOY_MODE=manager 2025-06-02 20:28:07.051630 | orchestrator | ++ DEPLOY_MODE=manager 2025-06-02 20:28:07.051637 | orchestrator | ++ export TEMPEST=false 2025-06-02 20:28:07.051644 | orchestrator | ++ TEMPEST=false 2025-06-02 20:28:07.051650 | orchestrator | ++ export IS_ZUUL=true 2025-06-02 20:28:07.051657 | orchestrator | ++ IS_ZUUL=true 2025-06-02 20:28:07.051663 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.191 2025-06-02 20:28:07.051670 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.191 2025-06-02 20:28:07.051677 | orchestrator | ++ export EXTERNAL_API=false 2025-06-02 20:28:07.051684 | orchestrator | ++ EXTERNAL_API=false 2025-06-02 20:28:07.051690 | orchestrator | ++ export IMAGE_USER=ubuntu 2025-06-02 20:28:07.051697 | orchestrator | ++ IMAGE_USER=ubuntu 2025-06-02 20:28:07.051703 | orchestrator 
| ++ export IMAGE_NODE_USER=ubuntu 2025-06-02 20:28:07.051710 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2025-06-02 20:28:07.051717 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2025-06-02 20:28:07.051723 | orchestrator | ++ CEPH_STACK=ceph-ansible 2025-06-02 20:28:07.051730 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2025-06-02 20:28:07.051737 | orchestrator | + sh -c /opt/configuration/scripts/check/100-ceph-with-ansible.sh 2025-06-02 20:28:07.057257 | orchestrator | + set -e 2025-06-02 20:28:07.057310 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-06-02 20:28:07.057319 | orchestrator | ++ export INTERACTIVE=false 2025-06-02 20:28:07.057328 | orchestrator | ++ INTERACTIVE=false 2025-06-02 20:28:07.057336 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-06-02 20:28:07.057344 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-06-02 20:28:07.057352 | orchestrator | + source /opt/configuration/scripts/manager-version.sh 2025-06-02 20:28:07.058453 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml 2025-06-02 20:28:07.065013 | orchestrator | 2025-06-02 20:28:07.065091 | orchestrator | # Ceph status 2025-06-02 20:28:07.065108 | orchestrator | 2025-06-02 20:28:07.065123 | orchestrator | ++ export MANAGER_VERSION=9.1.0 2025-06-02 20:28:07.065139 | orchestrator | ++ MANAGER_VERSION=9.1.0 2025-06-02 20:28:07.065154 | orchestrator | + echo 2025-06-02 20:28:07.065174 | orchestrator | + echo '# Ceph status' 2025-06-02 20:28:07.065190 | orchestrator | + echo 2025-06-02 20:28:07.065205 | orchestrator | + ceph -s 2025-06-02 20:28:07.620988 | orchestrator | cluster: 2025-06-02 20:28:07.621139 | orchestrator | id: 11111111-1111-1111-1111-111111111111 2025-06-02 20:28:07.621158 | orchestrator | health: HEALTH_OK 2025-06-02 20:28:07.621170 | orchestrator | 2025-06-02 20:28:07.621182 | orchestrator | services: 2025-06-02 20:28:07.621194 | orchestrator | mon: 3 
daemons, quorum testbed-node-0,testbed-node-1,testbed-node-2 (age 27m) 2025-06-02 20:28:07.621206 | orchestrator | mgr: testbed-node-1(active, since 15m), standbys: testbed-node-2, testbed-node-0 2025-06-02 20:28:07.621218 | orchestrator | mds: 1/1 daemons up, 2 standby 2025-06-02 20:28:07.621229 | orchestrator | osd: 6 osds: 6 up (since 24m), 6 in (since 24m) 2025-06-02 20:28:07.621240 | orchestrator | rgw: 3 daemons active (3 hosts, 1 zones) 2025-06-02 20:28:07.621251 | orchestrator | 2025-06-02 20:28:07.621262 | orchestrator | data: 2025-06-02 20:28:07.621273 | orchestrator | volumes: 1/1 healthy 2025-06-02 20:28:07.621285 | orchestrator | pools: 14 pools, 401 pgs 2025-06-02 20:28:07.621304 | orchestrator | objects: 524 objects, 2.2 GiB 2025-06-02 20:28:07.621322 | orchestrator | usage: 7.1 GiB used, 113 GiB / 120 GiB avail 2025-06-02 20:28:07.621340 | orchestrator | pgs: 401 active+clean 2025-06-02 20:28:07.621359 | orchestrator | 2025-06-02 20:28:07.663383 | orchestrator | 2025-06-02 20:28:07.663504 | orchestrator | # Ceph versions 2025-06-02 20:28:07.663531 | orchestrator | 2025-06-02 20:28:07.663610 | orchestrator | + echo 2025-06-02 20:28:07.663631 | orchestrator | + echo '# Ceph versions' 2025-06-02 20:28:07.663651 | orchestrator | + echo 2025-06-02 20:28:07.663670 | orchestrator | + ceph versions 2025-06-02 20:28:08.281410 | orchestrator | { 2025-06-02 20:28:08.281519 | orchestrator | "mon": { 2025-06-02 20:28:08.281644 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3 2025-06-02 20:28:08.281669 | orchestrator | }, 2025-06-02 20:28:08.281688 | orchestrator | "mgr": { 2025-06-02 20:28:08.281707 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3 2025-06-02 20:28:08.281727 | orchestrator | }, 2025-06-02 20:28:08.281746 | orchestrator | "osd": { 2025-06-02 20:28:08.281763 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef 
(stable)": 6 2025-06-02 20:28:08.281782 | orchestrator | }, 2025-06-02 20:28:08.281819 | orchestrator | "mds": { 2025-06-02 20:28:08.281841 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3 2025-06-02 20:28:08.281861 | orchestrator | }, 2025-06-02 20:28:08.281878 | orchestrator | "rgw": { 2025-06-02 20:28:08.281889 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3 2025-06-02 20:28:08.281900 | orchestrator | }, 2025-06-02 20:28:08.281911 | orchestrator | "overall": { 2025-06-02 20:28:08.281922 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 18 2025-06-02 20:28:08.281933 | orchestrator | } 2025-06-02 20:28:08.281946 | orchestrator | } 2025-06-02 20:28:08.324886 | orchestrator | 2025-06-02 20:28:08.324981 | orchestrator | # Ceph OSD tree 2025-06-02 20:28:08.324997 | orchestrator | 2025-06-02 20:28:08.325009 | orchestrator | + echo 2025-06-02 20:28:08.325020 | orchestrator | + echo '# Ceph OSD tree' 2025-06-02 20:28:08.325032 | orchestrator | + echo 2025-06-02 20:28:08.325043 | orchestrator | + ceph osd df tree 2025-06-02 20:28:08.805173 | orchestrator | ID CLASS WEIGHT REWEIGHT SIZE RAW USE DATA OMAP META AVAIL %USE VAR PGS STATUS TYPE NAME 2025-06-02 20:28:08.805287 | orchestrator | -1 0.11691 - 120 GiB 7.1 GiB 6.7 GiB 6 KiB 430 MiB 113 GiB 5.92 1.00 - root default 2025-06-02 20:28:08.805302 | orchestrator | -3 0.03897 - 40 GiB 2.4 GiB 2.2 GiB 2 KiB 143 MiB 38 GiB 5.92 1.00 - host testbed-node-3 2025-06-02 20:28:08.805314 | orchestrator | 2 hdd 0.01949 1.00000 20 GiB 1.1 GiB 1.0 GiB 1 KiB 70 MiB 19 GiB 5.36 0.91 195 up osd.2 2025-06-02 20:28:08.805325 | orchestrator | 5 hdd 0.01949 1.00000 20 GiB 1.3 GiB 1.2 GiB 1 KiB 74 MiB 19 GiB 6.47 1.09 195 up osd.5 2025-06-02 20:28:08.805336 | orchestrator | -5 0.03897 - 40 GiB 2.4 GiB 2.2 GiB 2 KiB 143 MiB 38 GiB 5.92 1.00 - host testbed-node-4 2025-06-02 20:28:08.805347 | 
orchestrator | 0 hdd 0.01949 1.00000 20 GiB 884 MiB 811 MiB 1 KiB 74 MiB 19 GiB 4.32 0.73 174 up osd.0 2025-06-02 20:28:08.805358 | orchestrator | 3 hdd 0.01949 1.00000 20 GiB 1.5 GiB 1.4 GiB 1 KiB 70 MiB 18 GiB 7.51 1.27 218 up osd.3 2025-06-02 20:28:08.805368 | orchestrator | -7 0.03897 - 40 GiB 2.4 GiB 2.2 GiB 2 KiB 143 MiB 38 GiB 5.92 1.00 - host testbed-node-5 2025-06-02 20:28:08.805379 | orchestrator | 1 hdd 0.01949 1.00000 20 GiB 1.3 GiB 1.2 GiB 1 KiB 74 MiB 19 GiB 6.36 1.07 192 up osd.1 2025-06-02 20:28:08.805390 | orchestrator | 4 hdd 0.01949 1.00000 20 GiB 1.1 GiB 1.0 GiB 1 KiB 70 MiB 19 GiB 5.48 0.93 196 up osd.4 2025-06-02 20:28:08.805401 | orchestrator | TOTAL 120 GiB 7.1 GiB 6.7 GiB 9.3 KiB 430 MiB 113 GiB 5.92 2025-06-02 20:28:08.805412 | orchestrator | MIN/MAX VAR: 0.73/1.27 STDDEV: 1.01 2025-06-02 20:28:08.846494 | orchestrator | 2025-06-02 20:28:08.846593 | orchestrator | # Ceph monitor status 2025-06-02 20:28:08.846605 | orchestrator | 2025-06-02 20:28:08.846614 | orchestrator | + echo 2025-06-02 20:28:08.846622 | orchestrator | + echo '# Ceph monitor status' 2025-06-02 20:28:08.846631 | orchestrator | + echo 2025-06-02 20:28:08.846640 | orchestrator | + ceph mon stat 2025-06-02 20:28:09.444342 | orchestrator | e1: 3 mons at {testbed-node-0=[v2:192.168.16.10:3300/0,v1:192.168.16.10:6789/0],testbed-node-1=[v2:192.168.16.11:3300/0,v1:192.168.16.11:6789/0],testbed-node-2=[v2:192.168.16.12:3300/0,v1:192.168.16.12:6789/0]} removed_ranks: {} disallowed_leaders: {}, election epoch 8, leader 0 testbed-node-0, quorum 0,1,2 testbed-node-0,testbed-node-1,testbed-node-2 2025-06-02 20:28:09.488337 | orchestrator | 2025-06-02 20:28:09.488455 | orchestrator | # Ceph quorum status 2025-06-02 20:28:09.488474 | orchestrator | 2025-06-02 20:28:09.488486 | orchestrator | + echo 2025-06-02 20:28:09.488498 | orchestrator | + echo '# Ceph quorum status' 2025-06-02 20:28:09.488510 | orchestrator | + echo 2025-06-02 20:28:09.488520 | orchestrator | + ceph quorum_status 
2025-06-02 20:28:09.488893 | orchestrator | + jq
2025-06-02 20:28:10.103373 | orchestrator | {
2025-06-02 20:28:10.103476 | orchestrator | "election_epoch": 8,
2025-06-02 20:28:10.103492 | orchestrator | "quorum": [
2025-06-02 20:28:10.103504 | orchestrator | 0,
2025-06-02 20:28:10.103516 | orchestrator | 1,
2025-06-02 20:28:10.103528 | orchestrator | 2
2025-06-02 20:28:10.103601 | orchestrator | ],
2025-06-02 20:28:10.103611 | orchestrator | "quorum_names": [
2025-06-02 20:28:10.103618 | orchestrator | "testbed-node-0",
2025-06-02 20:28:10.103625 | orchestrator | "testbed-node-1",
2025-06-02 20:28:10.103632 | orchestrator | "testbed-node-2"
2025-06-02 20:28:10.103638 | orchestrator | ],
2025-06-02 20:28:10.103645 | orchestrator | "quorum_leader_name": "testbed-node-0",
2025-06-02 20:28:10.103653 | orchestrator | "quorum_age": 1681,
2025-06-02 20:28:10.103660 | orchestrator | "features": {
2025-06-02 20:28:10.103667 | orchestrator | "quorum_con": "4540138322906710015",
2025-06-02 20:28:10.103674 | orchestrator | "quorum_mon": [
2025-06-02 20:28:10.103680 | orchestrator | "kraken",
2025-06-02 20:28:10.103687 | orchestrator | "luminous",
2025-06-02 20:28:10.103693 | orchestrator | "mimic",
2025-06-02 20:28:10.103700 | orchestrator | "osdmap-prune",
2025-06-02 20:28:10.103706 | orchestrator | "nautilus",
2025-06-02 20:28:10.103713 | orchestrator | "octopus",
2025-06-02 20:28:10.103719 | orchestrator | "pacific",
2025-06-02 20:28:10.103726 | orchestrator | "elector-pinging",
2025-06-02 20:28:10.103732 | orchestrator | "quincy",
2025-06-02 20:28:10.103738 | orchestrator | "reef"
2025-06-02 20:28:10.103745 | orchestrator | ]
2025-06-02 20:28:10.103751 | orchestrator | },
2025-06-02 20:28:10.103758 | orchestrator | "monmap": {
2025-06-02 20:28:10.103765 | orchestrator | "epoch": 1,
2025-06-02 20:28:10.103771 | orchestrator | "fsid": "11111111-1111-1111-1111-111111111111",
2025-06-02 20:28:10.103779 | orchestrator | "modified": "2025-06-02T19:59:51.425237Z",
2025-06-02 20:28:10.103785 | orchestrator | "created": "2025-06-02T19:59:51.425237Z",
2025-06-02 20:28:10.103792 | orchestrator | "min_mon_release": 18,
2025-06-02 20:28:10.103798 | orchestrator | "min_mon_release_name": "reef",
2025-06-02 20:28:10.103805 | orchestrator | "election_strategy": 1,
2025-06-02 20:28:10.103812 | orchestrator | "disallowed_leaders: ": "",
2025-06-02 20:28:10.103823 | orchestrator | "stretch_mode": false,
2025-06-02 20:28:10.103838 | orchestrator | "tiebreaker_mon": "",
2025-06-02 20:28:10.103851 | orchestrator | "removed_ranks: ": "",
2025-06-02 20:28:10.103862 | orchestrator | "features": {
2025-06-02 20:28:10.103872 | orchestrator | "persistent": [
2025-06-02 20:28:10.103882 | orchestrator | "kraken",
2025-06-02 20:28:10.103893 | orchestrator | "luminous",
2025-06-02 20:28:10.103903 | orchestrator | "mimic",
2025-06-02 20:28:10.103915 | orchestrator | "osdmap-prune",
2025-06-02 20:28:10.103925 | orchestrator | "nautilus",
2025-06-02 20:28:10.103935 | orchestrator | "octopus",
2025-06-02 20:28:10.103946 | orchestrator | "pacific",
2025-06-02 20:28:10.103957 | orchestrator | "elector-pinging",
2025-06-02 20:28:10.103968 | orchestrator | "quincy",
2025-06-02 20:28:10.103977 | orchestrator | "reef"
2025-06-02 20:28:10.103987 | orchestrator | ],
2025-06-02 20:28:10.103997 | orchestrator | "optional": []
2025-06-02 20:28:10.104007 | orchestrator | },
2025-06-02 20:28:10.104018 | orchestrator | "mons": [
2025-06-02 20:28:10.104030 | orchestrator | {
2025-06-02 20:28:10.104040 | orchestrator | "rank": 0,
2025-06-02 20:28:10.104051 | orchestrator | "name": "testbed-node-0",
2025-06-02 20:28:10.104063 | orchestrator | "public_addrs": {
2025-06-02 20:28:10.104075 | orchestrator | "addrvec": [
2025-06-02 20:28:10.104086 | orchestrator | {
2025-06-02 20:28:10.104097 | orchestrator | "type": "v2",
2025-06-02 20:28:10.104108 | orchestrator | "addr": "192.168.16.10:3300",
2025-06-02 20:28:10.104120 | orchestrator | "nonce": 0
2025-06-02 20:28:10.104130 | orchestrator | },
2025-06-02 20:28:10.104141 | orchestrator | {
2025-06-02 20:28:10.104151 | orchestrator | "type": "v1",
2025-06-02 20:28:10.104162 | orchestrator | "addr": "192.168.16.10:6789",
2025-06-02 20:28:10.104173 | orchestrator | "nonce": 0
2025-06-02 20:28:10.104185 | orchestrator | }
2025-06-02 20:28:10.104196 | orchestrator | ]
2025-06-02 20:28:10.104208 | orchestrator | },
2025-06-02 20:28:10.104215 | orchestrator | "addr": "192.168.16.10:6789/0",
2025-06-02 20:28:10.104248 | orchestrator | "public_addr": "192.168.16.10:6789/0",
2025-06-02 20:28:10.104255 | orchestrator | "priority": 0,
2025-06-02 20:28:10.104262 | orchestrator | "weight": 0,
2025-06-02 20:28:10.104268 | orchestrator | "crush_location": "{}"
2025-06-02 20:28:10.104275 | orchestrator | },
2025-06-02 20:28:10.104281 | orchestrator | {
2025-06-02 20:28:10.104288 | orchestrator | "rank": 1,
2025-06-02 20:28:10.104294 | orchestrator | "name": "testbed-node-1",
2025-06-02 20:28:10.104301 | orchestrator | "public_addrs": {
2025-06-02 20:28:10.104307 | orchestrator | "addrvec": [
2025-06-02 20:28:10.104314 | orchestrator | {
2025-06-02 20:28:10.104321 | orchestrator | "type": "v2",
2025-06-02 20:28:10.104327 | orchestrator | "addr": "192.168.16.11:3300",
2025-06-02 20:28:10.104334 | orchestrator | "nonce": 0
2025-06-02 20:28:10.104341 | orchestrator | },
2025-06-02 20:28:10.104347 | orchestrator | {
2025-06-02 20:28:10.104354 | orchestrator | "type": "v1",
2025-06-02 20:28:10.104361 | orchestrator | "addr": "192.168.16.11:6789",
2025-06-02 20:28:10.104367 | orchestrator | "nonce": 0
2025-06-02 20:28:10.104373 | orchestrator | }
2025-06-02 20:28:10.104380 | orchestrator | ]
2025-06-02 20:28:10.104387 | orchestrator | },
2025-06-02 20:28:10.104393 | orchestrator | "addr": "192.168.16.11:6789/0",
2025-06-02 20:28:10.104400 | orchestrator | "public_addr": "192.168.16.11:6789/0",
2025-06-02 20:28:10.104406 | orchestrator | "priority": 0,
2025-06-02 20:28:10.104413 | orchestrator | "weight": 0,
2025-06-02 20:28:10.104419 | orchestrator | "crush_location": "{}"
2025-06-02 20:28:10.104425 | orchestrator | },
2025-06-02 20:28:10.104432 | orchestrator | {
2025-06-02 20:28:10.104438 | orchestrator | "rank": 2,
2025-06-02 20:28:10.104445 | orchestrator | "name": "testbed-node-2",
2025-06-02 20:28:10.104451 | orchestrator | "public_addrs": {
2025-06-02 20:28:10.104458 | orchestrator | "addrvec": [
2025-06-02 20:28:10.104464 | orchestrator | {
2025-06-02 20:28:10.104471 | orchestrator | "type": "v2",
2025-06-02 20:28:10.104477 | orchestrator | "addr": "192.168.16.12:3300",
2025-06-02 20:28:10.104484 | orchestrator | "nonce": 0
2025-06-02 20:28:10.104490 | orchestrator | },
2025-06-02 20:28:10.104496 | orchestrator | {
2025-06-02 20:28:10.104503 | orchestrator | "type": "v1",
2025-06-02 20:28:10.104510 | orchestrator | "addr": "192.168.16.12:6789",
2025-06-02 20:28:10.104516 | orchestrator | "nonce": 0
2025-06-02 20:28:10.104522 | orchestrator | }
2025-06-02 20:28:10.104529 | orchestrator | ]
2025-06-02 20:28:10.104560 | orchestrator | },
2025-06-02 20:28:10.104573 | orchestrator | "addr": "192.168.16.12:6789/0",
2025-06-02 20:28:10.104591 | orchestrator | "public_addr": "192.168.16.12:6789/0",
2025-06-02 20:28:10.104601 | orchestrator | "priority": 0,
2025-06-02 20:28:10.104612 | orchestrator | "weight": 0,
2025-06-02 20:28:10.104622 | orchestrator | "crush_location": "{}"
2025-06-02 20:28:10.104632 | orchestrator | }
2025-06-02 20:28:10.104642 | orchestrator | ]
2025-06-02 20:28:10.104653 | orchestrator | }
2025-06-02 20:28:10.104664 | orchestrator | }
2025-06-02 20:28:10.104675 | orchestrator |
2025-06-02 20:28:10.104687 | orchestrator | # Ceph free space status
2025-06-02 20:28:10.104698 | orchestrator |
2025-06-02 20:28:10.104709 | orchestrator | + echo
2025-06-02 20:28:10.104720 | orchestrator | + echo '# Ceph free space status'
2025-06-02 20:28:10.104730 | orchestrator | + echo
2025-06-02 20:28:10.104737 | orchestrator | + ceph df
2025-06-02 20:28:10.697710 | orchestrator | --- RAW STORAGE ---
2025-06-02 20:28:10.697818 | orchestrator | CLASS SIZE AVAIL USED RAW USED %RAW USED
2025-06-02 20:28:10.697841 | orchestrator | hdd 120 GiB 113 GiB 7.1 GiB 7.1 GiB 5.92
2025-06-02 20:28:10.697853 | orchestrator | TOTAL 120 GiB 113 GiB 7.1 GiB 7.1 GiB 5.92
2025-06-02 20:28:10.697864 | orchestrator |
2025-06-02 20:28:10.697876 | orchestrator | --- POOLS ---
2025-06-02 20:28:10.697888 | orchestrator | POOL ID PGS STORED OBJECTS USED %USED MAX AVAIL
2025-06-02 20:28:10.697900 | orchestrator | .mgr 1 1 577 KiB 2 1.1 MiB 0 52 GiB
2025-06-02 20:28:10.697911 | orchestrator | cephfs_data 2 32 0 B 0 0 B 0 35 GiB
2025-06-02 20:28:10.697922 | orchestrator | cephfs_metadata 3 16 4.4 KiB 22 96 KiB 0 35 GiB
2025-06-02 20:28:10.697933 | orchestrator | default.rgw.buckets.data 4 32 0 B 0 0 B 0 35 GiB
2025-06-02 20:28:10.697944 | orchestrator | default.rgw.buckets.index 5 32 0 B 0 0 B 0 35 GiB
2025-06-02 20:28:10.697975 | orchestrator | default.rgw.control 6 32 0 B 8 0 B 0 35 GiB
2025-06-02 20:28:10.697987 | orchestrator | default.rgw.log 7 32 3.6 KiB 177 408 KiB 0 35 GiB
2025-06-02 20:28:10.697997 | orchestrator | default.rgw.meta 8 32 0 B 0 0 B 0 35 GiB
2025-06-02 20:28:10.698008 | orchestrator | .rgw.root 9 32 3.9 KiB 8 64 KiB 0 52 GiB
2025-06-02 20:28:10.698081 | orchestrator | backups 10 32 19 B 2 12 KiB 0 35 GiB
2025-06-02 20:28:10.698096 | orchestrator | volumes 11 32 19 B 2 12 KiB 0 35 GiB
2025-06-02 20:28:10.698107 | orchestrator | images 12 32 2.2 GiB 299 6.7 GiB 5.97 35 GiB
2025-06-02 20:28:10.698119 | orchestrator | metrics 13 32 19 B 2 12 KiB 0 35 GiB
2025-06-02 20:28:10.698131 | orchestrator | vms 14 32 19 B 2 12 KiB 0 35 GiB
2025-06-02 20:28:10.741911 | orchestrator | ++ semver 9.1.0 5.0.0
2025-06-02 20:28:10.779479 | orchestrator | + [[ 1 -eq -1 ]]
2025-06-02 20:28:10.779656 | orchestrator | + [[ ! -e /etc/redhat-release ]]
2025-06-02 20:28:10.779673 | orchestrator | + osism apply facts
2025-06-02 20:28:12.442154 | orchestrator | Registering Redlock._acquired_script
2025-06-02 20:28:12.442249 | orchestrator | Registering Redlock._extend_script
2025-06-02 20:28:12.442262 | orchestrator | Registering Redlock._release_script
2025-06-02 20:28:12.500270 | orchestrator | 2025-06-02 20:28:12 | INFO  | Task 95c9f2ac-122d-4e00-900b-c0f4e0a44dae (facts) was prepared for execution.
2025-06-02 20:28:12.500666 | orchestrator | 2025-06-02 20:28:12 | INFO  | It takes a moment until task 95c9f2ac-122d-4e00-900b-c0f4e0a44dae (facts) has been started and output is visible here.
2025-06-02 20:28:16.617800 | orchestrator |
2025-06-02 20:28:16.621282 | orchestrator | PLAY [Apply role facts] ********************************************************
2025-06-02 20:28:16.622752 | orchestrator |
2025-06-02 20:28:16.623205 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] *********************
2025-06-02 20:28:16.624338 | orchestrator | Monday 02 June 2025 20:28:16 +0000 (0:00:00.265) 0:00:00.265 ***********
2025-06-02 20:28:18.107796 | orchestrator | ok: [testbed-manager]
2025-06-02 20:28:18.107889 | orchestrator | ok: [testbed-node-0]
2025-06-02 20:28:18.108607 | orchestrator | ok: [testbed-node-1]
2025-06-02 20:28:18.109139 | orchestrator | ok: [testbed-node-2]
2025-06-02 20:28:18.109804 | orchestrator | ok: [testbed-node-3]
2025-06-02 20:28:18.112972 | orchestrator | ok: [testbed-node-4]
2025-06-02 20:28:18.113215 | orchestrator | ok: [testbed-node-5]
2025-06-02 20:28:18.114153 | orchestrator |
2025-06-02 20:28:18.114900 | orchestrator | TASK [osism.commons.facts : Copy fact files] ***********************************
2025-06-02 20:28:18.115149 | orchestrator | Monday 02 June 2025 20:28:18 +0000 (0:00:01.485) 0:00:01.750 ***********
2025-06-02 20:28:18.269897 | orchestrator | skipping: [testbed-manager]
2025-06-02 20:28:18.348138 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:28:18.428996 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:28:18.508164 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:28:18.585099 | orchestrator | skipping: [testbed-node-3]
2025-06-02 20:28:19.304316 | orchestrator | skipping: [testbed-node-4]
2025-06-02 20:28:19.304681 | orchestrator | skipping: [testbed-node-5]
2025-06-02 20:28:19.308957 | orchestrator |
2025-06-02 20:28:19.309036 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2025-06-02 20:28:19.309050 | orchestrator |
2025-06-02 20:28:19.309060 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2025-06-02 20:28:19.309073 | orchestrator | Monday 02 June 2025 20:28:19 +0000 (0:00:01.202) 0:00:02.953 ***********
2025-06-02 20:28:24.420847 | orchestrator | ok: [testbed-node-1]
2025-06-02 20:28:24.424796 | orchestrator | ok: [testbed-node-2]
2025-06-02 20:28:24.424879 | orchestrator | ok: [testbed-node-0]
2025-06-02 20:28:24.425097 | orchestrator | ok: [testbed-manager]
2025-06-02 20:28:24.426115 | orchestrator | ok: [testbed-node-4]
2025-06-02 20:28:24.426821 | orchestrator | ok: [testbed-node-3]
2025-06-02 20:28:24.427244 | orchestrator | ok: [testbed-node-5]
2025-06-02 20:28:24.428358 | orchestrator |
2025-06-02 20:28:24.429608 | orchestrator | PLAY [Gather facts for all hosts if using --limit] *****************************
2025-06-02 20:28:24.429958 | orchestrator |
2025-06-02 20:28:24.430591 | orchestrator | TASK [Gather facts for all hosts] **********************************************
2025-06-02 20:28:24.431262 | orchestrator | Monday 02 June 2025 20:28:24 +0000 (0:00:05.117) 0:00:08.070 ***********
2025-06-02 20:28:24.599219 | orchestrator | skipping: [testbed-manager]
2025-06-02 20:28:24.696400 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:28:24.780008 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:28:24.862411 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:28:24.959964 | orchestrator | skipping: [testbed-node-3]
2025-06-02 20:28:24.996991 | orchestrator | skipping: [testbed-node-4]
2025-06-02 20:28:24.998408 | orchestrator | skipping: [testbed-node-5]
2025-06-02 20:28:25.000988 | orchestrator |
2025-06-02 20:28:25.002210 | orchestrator | PLAY RECAP *********************************************************************
2025-06-02 20:28:25.003894 | orchestrator | 2025-06-02 20:28:25 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-06-02 20:28:25.003940 | orchestrator | 2025-06-02 20:28:25 | INFO  | Please wait and do not abort execution.
2025-06-02 20:28:25.005313 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-06-02 20:28:25.006482 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-06-02 20:28:25.007234 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-06-02 20:28:25.008754 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-06-02 20:28:25.009673 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-06-02 20:28:25.010640 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-06-02 20:28:25.011510 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-06-02 20:28:25.012210 | orchestrator |
2025-06-02 20:28:25.012918 | orchestrator |
2025-06-02 20:28:25.013916 | orchestrator | TASKS RECAP ********************************************************************
2025-06-02 20:28:25.014839 | orchestrator | Monday 02 June 2025 20:28:24 +0000 (0:00:00.575) 0:00:08.646 ***********
2025-06-02 20:28:25.015630 | orchestrator | ===============================================================================
2025-06-02 20:28:25.016175 | orchestrator | Gathers facts about hosts ----------------------------------------------- 5.12s
2025-06-02 20:28:25.017381 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.49s
2025-06-02 20:28:25.017887 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.20s
2025-06-02 20:28:25.018916 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.58s
2025-06-02 20:28:25.652814 | orchestrator | + osism validate ceph-mons
2025-06-02 20:28:27.296350 | orchestrator | Registering Redlock._acquired_script
2025-06-02 20:28:27.296434 | orchestrator | Registering Redlock._extend_script
2025-06-02 20:28:27.296444 | orchestrator | Registering Redlock._release_script
2025-06-02 20:28:46.741438 | orchestrator |
2025-06-02 20:28:46.741595 | orchestrator | PLAY [Ceph validate mons] ******************************************************
2025-06-02 20:28:46.741608 | orchestrator |
2025-06-02 20:28:46.741615 | orchestrator | TASK [Get timestamp for report file] *******************************************
2025-06-02 20:28:46.741622 | orchestrator | Monday 02 June 2025 20:28:31 +0000 (0:00:00.440) 0:00:00.440 ***********
2025-06-02 20:28:46.741644 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2025-06-02 20:28:46.741651 | orchestrator |
2025-06-02 20:28:46.741657 | orchestrator | TASK [Create report output directory] ******************************************
2025-06-02 20:28:46.741663 | orchestrator | Monday 02 June 2025 20:28:32 +0000 (0:00:00.607) 0:00:01.048 ***********
2025-06-02 20:28:46.741670 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2025-06-02 20:28:46.741676 | orchestrator |
2025-06-02 20:28:46.741682 | orchestrator | TASK [Define report vars] ******************************************************
2025-06-02 20:28:46.741688 | orchestrator | Monday 02 June 2025 20:28:33 +0000 (0:00:00.816) 0:00:01.865 ***********
2025-06-02 20:28:46.741695 | orchestrator | ok: [testbed-node-0]
2025-06-02 20:28:46.741703 | orchestrator |
2025-06-02 20:28:46.741709 | orchestrator | TASK [Prepare test data for container existance test] **************************
2025-06-02 20:28:46.741715 | orchestrator | Monday 02 June 2025 20:28:33 +0000 (0:00:00.237) 0:00:02.102 ***********
2025-06-02 20:28:46.741721 | orchestrator | ok: [testbed-node-0]
2025-06-02 20:28:46.741729 | orchestrator | ok: [testbed-node-1]
2025-06-02 20:28:46.741735 | orchestrator | ok: [testbed-node-2]
2025-06-02 20:28:46.741741 | orchestrator |
2025-06-02 20:28:46.741747 | orchestrator | TASK [Get container info] ******************************************************
2025-06-02 20:28:46.741753 | orchestrator | Monday 02 June 2025 20:28:33 +0000 (0:00:00.295) 0:00:02.397 ***********
2025-06-02 20:28:46.741759 | orchestrator | ok: [testbed-node-1]
2025-06-02 20:28:46.741766 | orchestrator | ok: [testbed-node-2]
2025-06-02 20:28:46.741772 | orchestrator | ok: [testbed-node-0]
2025-06-02 20:28:46.741778 | orchestrator |
2025-06-02 20:28:46.741784 | orchestrator | TASK [Set test result to failed if container is missing] ***********************
2025-06-02 20:28:46.741790 | orchestrator | Monday 02 June 2025 20:28:34 +0000 (0:00:01.021) 0:00:03.419 ***********
2025-06-02 20:28:46.741796 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:28:46.741802 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:28:46.741808 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:28:46.741815 | orchestrator |
2025-06-02 20:28:46.741821 | orchestrator | TASK [Set test result to passed if container is existing] **********************
2025-06-02 20:28:46.741827 | orchestrator | Monday 02 June 2025 20:28:34 +0000 (0:00:00.302) 0:00:03.721 ***********
2025-06-02 20:28:46.741833 | orchestrator | ok: [testbed-node-0]
2025-06-02 20:28:46.741839 | orchestrator | ok: [testbed-node-1]
2025-06-02 20:28:46.741845 | orchestrator | ok: [testbed-node-2]
2025-06-02 20:28:46.741851 | orchestrator |
2025-06-02 20:28:46.741857 | orchestrator | TASK [Prepare test data] *******************************************************
2025-06-02 20:28:46.741863 | orchestrator | Monday 02 June 2025 20:28:35 +0000 (0:00:00.306) 0:00:04.212 ***********
2025-06-02 20:28:46.741869 | orchestrator | ok: [testbed-node-0]
2025-06-02 20:28:46.741875 | orchestrator | ok: [testbed-node-1]
2025-06-02 20:28:46.741881 | orchestrator | ok: [testbed-node-2]
2025-06-02 20:28:46.741887 | orchestrator |
2025-06-02 20:28:46.741893 | orchestrator | TASK [Set test result to failed if ceph-mon is not running] ********************
2025-06-02 20:28:46.741899 | orchestrator | Monday 02 June 2025 20:28:35 +0000 (0:00:00.306) 0:00:04.518 ***********
2025-06-02 20:28:46.741906 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:28:46.741912 | orchestrator | skipping: [testbed-node-1]
2025-06-02 20:28:46.741918 | orchestrator | skipping: [testbed-node-2]
2025-06-02 20:28:46.741924 | orchestrator |
2025-06-02 20:28:46.741930 | orchestrator | TASK [Set test result to passed if ceph-mon is running] ************************
2025-06-02 20:28:46.741936 | orchestrator | Monday 02 June 2025 20:28:35 +0000 (0:00:00.281) 0:00:04.799 ***********
2025-06-02 20:28:46.741942 | orchestrator | ok: [testbed-node-0]
2025-06-02 20:28:46.741948 | orchestrator | ok: [testbed-node-1]
2025-06-02 20:28:46.741954 | orchestrator | ok: [testbed-node-2]
2025-06-02 20:28:46.741960 | orchestrator |
2025-06-02 20:28:46.741967 | orchestrator | TASK [Aggregate test results step one] *****************************************
2025-06-02 20:28:46.741973 | orchestrator | Monday 02 June 2025 20:28:36 +0000 (0:00:00.294) 0:00:05.094 ***********
2025-06-02 20:28:46.741980 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:28:46.741993 | orchestrator |
2025-06-02 20:28:46.742000 | orchestrator | TASK [Aggregate test results step two] *****************************************
2025-06-02 20:28:46.742007 | orchestrator | Monday 02 June 2025 20:28:36 +0000 (0:00:00.636) 0:00:05.730 ***********
2025-06-02 20:28:46.742049 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:28:46.742057 | orchestrator |
2025-06-02 20:28:46.742064 | orchestrator | TASK [Aggregate test results step three] ***************************************
2025-06-02 20:28:46.742070 | orchestrator | Monday 02 June 2025 20:28:37 +0000 (0:00:00.252) 0:00:05.983 ***********
2025-06-02 20:28:46.742089 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:28:46.742096 | orchestrator |
2025-06-02 20:28:46.742102 | orchestrator | TASK [Flush handlers] **********************************************************
2025-06-02 20:28:46.742108 | orchestrator | Monday 02 June 2025 20:28:37 +0000 (0:00:00.067) 0:00:06.222 ***********
2025-06-02 20:28:46.742115 | orchestrator |
2025-06-02 20:28:46.742121 | orchestrator | TASK [Flush handlers] **********************************************************
2025-06-02 20:28:46.742127 | orchestrator | Monday 02 June 2025 20:28:37 +0000 (0:00:00.067) 0:00:06.289 ***********
2025-06-02 20:28:46.742133 | orchestrator |
2025-06-02 20:28:46.742139 | orchestrator | TASK [Flush handlers] **********************************************************
2025-06-02 20:28:46.742145 | orchestrator | Monday 02 June 2025 20:28:37 +0000 (0:00:00.067) 0:00:06.356 ***********
2025-06-02 20:28:46.742151 | orchestrator |
2025-06-02 20:28:46.742158 | orchestrator | TASK [Print report file information] *******************************************
2025-06-02 20:28:46.742164 | orchestrator | Monday 02 June 2025 20:28:37 +0000 (0:00:00.074) 0:00:06.431 ***********
2025-06-02 20:28:46.742170 | orchestrator | skipping: [testbed-node-0]
2025-06-02 20:28:46.742176 | orchestrator |
2025-06-02 20:28:46.742182 | orchestrator | TASK [Fail due to missing
containers] ****************************************** 2025-06-02 20:28:46.742188 | orchestrator | Monday 02 June 2025 20:28:37 +0000 (0:00:00.251) 0:00:06.683 *********** 2025-06-02 20:28:46.742194 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:28:46.742221 | orchestrator | 2025-06-02 20:28:46.742242 | orchestrator | TASK [Prepare quorum test vars] ************************************************ 2025-06-02 20:28:46.742249 | orchestrator | Monday 02 June 2025 20:28:38 +0000 (0:00:00.282) 0:00:06.966 *********** 2025-06-02 20:28:46.742255 | orchestrator | ok: [testbed-node-0] 2025-06-02 20:28:46.742261 | orchestrator | 2025-06-02 20:28:46.742268 | orchestrator | TASK [Get monmap info from one mon container] ********************************** 2025-06-02 20:28:46.742273 | orchestrator | Monday 02 June 2025 20:28:38 +0000 (0:00:00.100) 0:00:07.066 *********** 2025-06-02 20:28:46.742280 | orchestrator | changed: [testbed-node-0] 2025-06-02 20:28:46.742285 | orchestrator | 2025-06-02 20:28:46.742292 | orchestrator | TASK [Set quorum test data] **************************************************** 2025-06-02 20:28:46.742298 | orchestrator | Monday 02 June 2025 20:28:39 +0000 (0:00:01.605) 0:00:08.672 *********** 2025-06-02 20:28:46.742304 | orchestrator | ok: [testbed-node-0] 2025-06-02 20:28:46.742310 | orchestrator | 2025-06-02 20:28:46.742316 | orchestrator | TASK [Fail quorum test if not all monitors are in quorum] ********************** 2025-06-02 20:28:46.742322 | orchestrator | Monday 02 June 2025 20:28:40 +0000 (0:00:00.302) 0:00:08.974 *********** 2025-06-02 20:28:46.742328 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:28:46.742334 | orchestrator | 2025-06-02 20:28:46.742340 | orchestrator | TASK [Pass quorum test if all monitors are in quorum] ************************** 2025-06-02 20:28:46.742350 | orchestrator | Monday 02 June 2025 20:28:40 +0000 (0:00:00.312) 0:00:09.287 *********** 2025-06-02 20:28:46.742356 | orchestrator | ok: 
[testbed-node-0] 2025-06-02 20:28:46.742362 | orchestrator | 2025-06-02 20:28:46.742368 | orchestrator | TASK [Set fsid test vars] ****************************************************** 2025-06-02 20:28:46.742374 | orchestrator | Monday 02 June 2025 20:28:40 +0000 (0:00:00.336) 0:00:09.623 *********** 2025-06-02 20:28:46.742380 | orchestrator | ok: [testbed-node-0] 2025-06-02 20:28:46.742386 | orchestrator | 2025-06-02 20:28:46.742392 | orchestrator | TASK [Fail Cluster FSID test if FSID does not match configuration] ************* 2025-06-02 20:28:46.742398 | orchestrator | Monday 02 June 2025 20:28:41 +0000 (0:00:00.314) 0:00:09.938 *********** 2025-06-02 20:28:46.742446 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:28:46.742454 | orchestrator | 2025-06-02 20:28:46.742480 | orchestrator | TASK [Pass Cluster FSID test if it matches configuration] ********************** 2025-06-02 20:28:46.742522 | orchestrator | Monday 02 June 2025 20:28:41 +0000 (0:00:00.113) 0:00:10.052 *********** 2025-06-02 20:28:46.742532 | orchestrator | ok: [testbed-node-0] 2025-06-02 20:28:46.742541 | orchestrator | 2025-06-02 20:28:46.742550 | orchestrator | TASK [Prepare status test vars] ************************************************ 2025-06-02 20:28:46.742559 | orchestrator | Monday 02 June 2025 20:28:41 +0000 (0:00:00.133) 0:00:10.185 *********** 2025-06-02 20:28:46.742568 | orchestrator | ok: [testbed-node-0] 2025-06-02 20:28:46.742578 | orchestrator | 2025-06-02 20:28:46.742588 | orchestrator | TASK [Gather status data] ****************************************************** 2025-06-02 20:28:46.742597 | orchestrator | Monday 02 June 2025 20:28:41 +0000 (0:00:00.115) 0:00:10.301 *********** 2025-06-02 20:28:46.742608 | orchestrator | changed: [testbed-node-0] 2025-06-02 20:28:46.742618 | orchestrator | 2025-06-02 20:28:46.742628 | orchestrator | TASK [Set health test data] **************************************************** 2025-06-02 20:28:46.742638 | orchestrator | 
Monday 02 June 2025 20:28:42 +0000 (0:00:01.381) 0:00:11.682 *********** 2025-06-02 20:28:46.742649 | orchestrator | ok: [testbed-node-0] 2025-06-02 20:28:46.742656 | orchestrator | 2025-06-02 20:28:46.742662 | orchestrator | TASK [Fail cluster-health if health is not acceptable] ************************* 2025-06-02 20:28:46.742672 | orchestrator | Monday 02 June 2025 20:28:43 +0000 (0:00:00.291) 0:00:11.973 *********** 2025-06-02 20:28:46.742682 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:28:46.742692 | orchestrator | 2025-06-02 20:28:46.742701 | orchestrator | TASK [Pass cluster-health if health is acceptable] ***************************** 2025-06-02 20:28:46.742711 | orchestrator | Monday 02 June 2025 20:28:43 +0000 (0:00:00.146) 0:00:12.120 *********** 2025-06-02 20:28:46.742721 | orchestrator | ok: [testbed-node-0] 2025-06-02 20:28:46.742731 | orchestrator | 2025-06-02 20:28:46.742742 | orchestrator | TASK [Fail cluster-health if health is not acceptable (strict)] **************** 2025-06-02 20:28:46.742752 | orchestrator | Monday 02 June 2025 20:28:43 +0000 (0:00:00.164) 0:00:12.284 *********** 2025-06-02 20:28:46.742763 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:28:46.742773 | orchestrator | 2025-06-02 20:28:46.742782 | orchestrator | TASK [Pass cluster-health if status is OK (strict)] **************************** 2025-06-02 20:28:46.742788 | orchestrator | Monday 02 June 2025 20:28:43 +0000 (0:00:00.142) 0:00:12.427 *********** 2025-06-02 20:28:46.742794 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:28:46.742800 | orchestrator | 2025-06-02 20:28:46.742806 | orchestrator | TASK [Set validation result to passed if no test failed] *********************** 2025-06-02 20:28:46.742812 | orchestrator | Monday 02 June 2025 20:28:43 +0000 (0:00:00.330) 0:00:12.757 *********** 2025-06-02 20:28:46.742818 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-06-02 20:28:46.742825 | orchestrator | 2025-06-02 
20:28:46.742831 | orchestrator | TASK [Set validation result to failed if a test failed] ************************ 2025-06-02 20:28:46.742837 | orchestrator | Monday 02 June 2025 20:28:44 +0000 (0:00:00.244) 0:00:13.001 *********** 2025-06-02 20:28:46.742843 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:28:46.742849 | orchestrator | 2025-06-02 20:28:46.742855 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2025-06-02 20:28:46.742861 | orchestrator | Monday 02 June 2025 20:28:44 +0000 (0:00:00.243) 0:00:13.245 *********** 2025-06-02 20:28:46.742867 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-06-02 20:28:46.742873 | orchestrator | 2025-06-02 20:28:46.742880 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2025-06-02 20:28:46.742886 | orchestrator | Monday 02 June 2025 20:28:45 +0000 (0:00:01.575) 0:00:14.821 *********** 2025-06-02 20:28:46.742892 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-06-02 20:28:46.742898 | orchestrator | 2025-06-02 20:28:46.742904 | orchestrator | TASK [Aggregate test results step three] *************************************** 2025-06-02 20:28:46.742917 | orchestrator | Monday 02 June 2025 20:28:46 +0000 (0:00:00.262) 0:00:15.083 *********** 2025-06-02 20:28:46.742924 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-06-02 20:28:46.742930 | orchestrator | 2025-06-02 20:28:46.742944 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-06-02 20:28:49.076106 | orchestrator | Monday 02 June 2025 20:28:46 +0000 (0:00:00.249) 0:00:15.333 *********** 2025-06-02 20:28:49.076242 | orchestrator | 2025-06-02 20:28:49.076266 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-06-02 20:28:49.076279 | orchestrator | Monday 02 June 2025 20:28:46 +0000 
(0:00:00.096) 0:00:15.429 *********** 2025-06-02 20:28:49.076290 | orchestrator | 2025-06-02 20:28:49.076301 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-06-02 20:28:49.076312 | orchestrator | Monday 02 June 2025 20:28:46 +0000 (0:00:00.073) 0:00:15.503 *********** 2025-06-02 20:28:49.076323 | orchestrator | 2025-06-02 20:28:49.076334 | orchestrator | RUNNING HANDLER [Write report file] ******************************************** 2025-06-02 20:28:49.076345 | orchestrator | Monday 02 June 2025 20:28:46 +0000 (0:00:00.073) 0:00:15.576 *********** 2025-06-02 20:28:49.076356 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-06-02 20:28:49.076367 | orchestrator | 2025-06-02 20:28:49.076377 | orchestrator | TASK [Print report file information] ******************************************* 2025-06-02 20:28:49.076388 | orchestrator | Monday 02 June 2025 20:28:48 +0000 (0:00:01.491) 0:00:17.068 *********** 2025-06-02 20:28:49.076398 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => { 2025-06-02 20:28:49.076429 | orchestrator |  "msg": [ 2025-06-02 20:28:49.076442 | orchestrator |  "Validator run completed.", 2025-06-02 20:28:49.076453 | orchestrator |  "You can find the report file here:", 2025-06-02 20:28:49.076464 | orchestrator |  "/opt/reports/validator/ceph-mons-validator-2025-06-02T20:28:32+00:00-report.json", 2025-06-02 20:28:49.076476 | orchestrator |  "on the following host:", 2025-06-02 20:28:49.076487 | orchestrator |  "testbed-manager" 2025-06-02 20:28:49.076566 | orchestrator |  ] 2025-06-02 20:28:49.076586 | orchestrator | } 2025-06-02 20:28:49.076606 | orchestrator | 2025-06-02 20:28:49.076630 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-02 20:28:49.076651 | orchestrator | testbed-node-0 : ok=24  changed=5  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0 2025-06-02 20:28:49.076671 | 
orchestrator | testbed-node-1 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-02 20:28:49.076691 | orchestrator | testbed-node-2 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-02 20:28:49.076709 | orchestrator | 2025-06-02 20:28:49.076725 | orchestrator | 2025-06-02 20:28:49.076741 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-02 20:28:49.076758 | orchestrator | Monday 02 June 2025 20:28:48 +0000 (0:00:00.566) 0:00:17.635 *********** 2025-06-02 20:28:49.076775 | orchestrator | =============================================================================== 2025-06-02 20:28:49.076793 | orchestrator | Get monmap info from one mon container ---------------------------------- 1.61s 2025-06-02 20:28:49.076811 | orchestrator | Aggregate test results step one ----------------------------------------- 1.58s 2025-06-02 20:28:49.076830 | orchestrator | Write report file ------------------------------------------------------- 1.49s 2025-06-02 20:28:49.076850 | orchestrator | Gather status data ------------------------------------------------------ 1.38s 2025-06-02 20:28:49.076868 | orchestrator | Get container info ------------------------------------------------------ 1.02s 2025-06-02 20:28:49.076886 | orchestrator | Create report output directory ------------------------------------------ 0.82s 2025-06-02 20:28:49.076904 | orchestrator | Aggregate test results step one ----------------------------------------- 0.64s 2025-06-02 20:28:49.076950 | orchestrator | Get timestamp for report file ------------------------------------------- 0.61s 2025-06-02 20:28:49.076969 | orchestrator | Print report file information ------------------------------------------- 0.57s 2025-06-02 20:28:49.076986 | orchestrator | Set test result to passed if container is existing ---------------------- 0.49s 2025-06-02 20:28:49.077004 | orchestrator | Pass quorum test if all 
monitors are in quorum -------------------------- 0.34s 2025-06-02 20:28:49.077020 | orchestrator | Pass cluster-health if status is OK (strict) ---------------------------- 0.33s 2025-06-02 20:28:49.077036 | orchestrator | Set fsid test vars ------------------------------------------------------ 0.31s 2025-06-02 20:28:49.077054 | orchestrator | Fail quorum test if not all monitors are in quorum ---------------------- 0.31s 2025-06-02 20:28:49.077071 | orchestrator | Prepare test data ------------------------------------------------------- 0.31s 2025-06-02 20:28:49.077087 | orchestrator | Set test result to failed if container is missing ----------------------- 0.30s 2025-06-02 20:28:49.077103 | orchestrator | Set quorum test data ---------------------------------------------------- 0.30s 2025-06-02 20:28:49.077118 | orchestrator | Prepare test data for container existance test -------------------------- 0.30s 2025-06-02 20:28:49.077134 | orchestrator | Set test result to passed if ceph-mon is running ------------------------ 0.29s 2025-06-02 20:28:49.077149 | orchestrator | Set health test data ---------------------------------------------------- 0.29s 2025-06-02 20:28:49.319049 | orchestrator | + osism validate ceph-mgrs 2025-06-02 20:28:51.061248 | orchestrator | Registering Redlock._acquired_script 2025-06-02 20:28:51.061349 | orchestrator | Registering Redlock._extend_script 2025-06-02 20:28:51.061362 | orchestrator | Registering Redlock._release_script 2025-06-02 20:29:09.938737 | orchestrator | 2025-06-02 20:29:09.938837 | orchestrator | PLAY [Ceph validate mgrs] ****************************************************** 2025-06-02 20:29:09.938853 | orchestrator | 2025-06-02 20:29:09.938864 | orchestrator | TASK [Get timestamp for report file] ******************************************* 2025-06-02 20:29:09.938876 | orchestrator | Monday 02 June 2025 20:28:55 +0000 (0:00:00.434) 0:00:00.434 *********** 2025-06-02 20:29:09.938887 | orchestrator | ok: 
[testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-06-02 20:29:09.938899 | orchestrator | 2025-06-02 20:29:09.938910 | orchestrator | TASK [Create report output directory] ****************************************** 2025-06-02 20:29:09.938921 | orchestrator | Monday 02 June 2025 20:28:55 +0000 (0:00:00.651) 0:00:01.086 *********** 2025-06-02 20:29:09.938932 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-06-02 20:29:09.938943 | orchestrator | 2025-06-02 20:29:09.938954 | orchestrator | TASK [Define report vars] ****************************************************** 2025-06-02 20:29:09.938965 | orchestrator | Monday 02 June 2025 20:28:56 +0000 (0:00:00.812) 0:00:01.898 *********** 2025-06-02 20:29:09.938976 | orchestrator | ok: [testbed-node-0] 2025-06-02 20:29:09.938988 | orchestrator | 2025-06-02 20:29:09.939000 | orchestrator | TASK [Prepare test data for container existance test] ************************** 2025-06-02 20:29:09.939011 | orchestrator | Monday 02 June 2025 20:28:56 +0000 (0:00:00.225) 0:00:02.123 *********** 2025-06-02 20:29:09.939022 | orchestrator | ok: [testbed-node-0] 2025-06-02 20:29:09.939033 | orchestrator | ok: [testbed-node-1] 2025-06-02 20:29:09.939044 | orchestrator | ok: [testbed-node-2] 2025-06-02 20:29:09.939055 | orchestrator | 2025-06-02 20:29:09.939067 | orchestrator | TASK [Get container info] ****************************************************** 2025-06-02 20:29:09.939078 | orchestrator | Monday 02 June 2025 20:28:57 +0000 (0:00:00.292) 0:00:02.416 *********** 2025-06-02 20:29:09.939089 | orchestrator | ok: [testbed-node-1] 2025-06-02 20:29:09.939100 | orchestrator | ok: [testbed-node-0] 2025-06-02 20:29:09.939111 | orchestrator | ok: [testbed-node-2] 2025-06-02 20:29:09.939122 | orchestrator | 2025-06-02 20:29:09.939133 | orchestrator | TASK [Set test result to failed if container is missing] *********************** 2025-06-02 20:29:09.939153 | orchestrator | Monday 02 June 2025 20:28:58 +0000 
(0:00:00.989) 0:00:03.405 *********** 2025-06-02 20:29:09.939165 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:29:09.939176 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:29:09.939260 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:29:09.939275 | orchestrator | 2025-06-02 20:29:09.939289 | orchestrator | TASK [Set test result to passed if container is existing] ********************** 2025-06-02 20:29:09.939302 | orchestrator | Monday 02 June 2025 20:28:58 +0000 (0:00:00.277) 0:00:03.683 *********** 2025-06-02 20:29:09.939316 | orchestrator | ok: [testbed-node-0] 2025-06-02 20:29:09.939330 | orchestrator | ok: [testbed-node-1] 2025-06-02 20:29:09.939341 | orchestrator | ok: [testbed-node-2] 2025-06-02 20:29:09.939353 | orchestrator | 2025-06-02 20:29:09.939364 | orchestrator | TASK [Prepare test data] ******************************************************* 2025-06-02 20:29:09.939377 | orchestrator | Monday 02 June 2025 20:28:59 +0000 (0:00:00.496) 0:00:04.180 *********** 2025-06-02 20:29:09.939397 | orchestrator | ok: [testbed-node-0] 2025-06-02 20:29:09.939414 | orchestrator | ok: [testbed-node-1] 2025-06-02 20:29:09.939432 | orchestrator | ok: [testbed-node-2] 2025-06-02 20:29:09.939452 | orchestrator | 2025-06-02 20:29:09.939497 | orchestrator | TASK [Set test result to failed if ceph-mgr is not running] ******************** 2025-06-02 20:29:09.939517 | orchestrator | Monday 02 June 2025 20:28:59 +0000 (0:00:00.363) 0:00:04.543 *********** 2025-06-02 20:29:09.939535 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:29:09.939549 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:29:09.939559 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:29:09.939570 | orchestrator | 2025-06-02 20:29:09.939581 | orchestrator | TASK [Set test result to passed if ceph-mgr is running] ************************ 2025-06-02 20:29:09.939592 | orchestrator | Monday 02 June 2025 20:28:59 +0000 (0:00:00.287) 0:00:04.831 *********** 
2025-06-02 20:29:09.939603 | orchestrator | ok: [testbed-node-0] 2025-06-02 20:29:09.939613 | orchestrator | ok: [testbed-node-1] 2025-06-02 20:29:09.939624 | orchestrator | ok: [testbed-node-2] 2025-06-02 20:29:09.939635 | orchestrator | 2025-06-02 20:29:09.939645 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2025-06-02 20:29:09.939656 | orchestrator | Monday 02 June 2025 20:28:59 +0000 (0:00:00.299) 0:00:05.130 *********** 2025-06-02 20:29:09.939667 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:29:09.939677 | orchestrator | 2025-06-02 20:29:09.939688 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2025-06-02 20:29:09.939699 | orchestrator | Monday 02 June 2025 20:29:00 +0000 (0:00:00.648) 0:00:05.779 *********** 2025-06-02 20:29:09.939709 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:29:09.939720 | orchestrator | 2025-06-02 20:29:09.939730 | orchestrator | TASK [Aggregate test results step three] *************************************** 2025-06-02 20:29:09.939741 | orchestrator | Monday 02 June 2025 20:29:00 +0000 (0:00:00.254) 0:00:06.033 *********** 2025-06-02 20:29:09.939752 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:29:09.939762 | orchestrator | 2025-06-02 20:29:09.939773 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-06-02 20:29:09.939784 | orchestrator | Monday 02 June 2025 20:29:01 +0000 (0:00:00.273) 0:00:06.307 *********** 2025-06-02 20:29:09.939794 | orchestrator | 2025-06-02 20:29:09.939805 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-06-02 20:29:09.939816 | orchestrator | Monday 02 June 2025 20:29:01 +0000 (0:00:00.081) 0:00:06.389 *********** 2025-06-02 20:29:09.939827 | orchestrator | 2025-06-02 20:29:09.939837 | orchestrator | TASK [Flush handlers] 
********************************************************** 2025-06-02 20:29:09.939848 | orchestrator | Monday 02 June 2025 20:29:01 +0000 (0:00:00.079) 0:00:06.468 *********** 2025-06-02 20:29:09.939858 | orchestrator | 2025-06-02 20:29:09.939869 | orchestrator | TASK [Print report file information] ******************************************* 2025-06-02 20:29:09.939880 | orchestrator | Monday 02 June 2025 20:29:01 +0000 (0:00:00.072) 0:00:06.541 *********** 2025-06-02 20:29:09.939890 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:29:09.939901 | orchestrator | 2025-06-02 20:29:09.939912 | orchestrator | TASK [Fail due to missing containers] ****************************************** 2025-06-02 20:29:09.939923 | orchestrator | Monday 02 June 2025 20:29:01 +0000 (0:00:00.229) 0:00:06.770 *********** 2025-06-02 20:29:09.939934 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:29:09.939954 | orchestrator | 2025-06-02 20:29:09.939982 | orchestrator | TASK [Define mgr module test vars] ********************************************* 2025-06-02 20:29:09.939994 | orchestrator | Monday 02 June 2025 20:29:01 +0000 (0:00:00.257) 0:00:07.028 *********** 2025-06-02 20:29:09.940012 | orchestrator | ok: [testbed-node-0] 2025-06-02 20:29:09.940028 | orchestrator | 2025-06-02 20:29:09.940044 | orchestrator | TASK [Gather list of mgr modules] ********************************************** 2025-06-02 20:29:09.940061 | orchestrator | Monday 02 June 2025 20:29:02 +0000 (0:00:00.115) 0:00:07.144 *********** 2025-06-02 20:29:09.940079 | orchestrator | changed: [testbed-node-0] 2025-06-02 20:29:09.940095 | orchestrator | 2025-06-02 20:29:09.940113 | orchestrator | TASK [Parse mgr module list from json] ***************************************** 2025-06-02 20:29:09.940130 | orchestrator | Monday 02 June 2025 20:29:04 +0000 (0:00:02.033) 0:00:09.177 *********** 2025-06-02 20:29:09.940148 | orchestrator | ok: [testbed-node-0] 2025-06-02 20:29:09.940166 | orchestrator | 
2025-06-02 20:29:09.940184 | orchestrator | TASK [Extract list of enabled mgr modules] ************************************* 2025-06-02 20:29:09.940202 | orchestrator | Monday 02 June 2025 20:29:04 +0000 (0:00:00.250) 0:00:09.427 *********** 2025-06-02 20:29:09.940220 | orchestrator | ok: [testbed-node-0] 2025-06-02 20:29:09.940236 | orchestrator | 2025-06-02 20:29:09.940255 | orchestrator | TASK [Fail test if mgr modules are disabled that should be enabled] ************ 2025-06-02 20:29:09.940273 | orchestrator | Monday 02 June 2025 20:29:05 +0000 (0:00:00.722) 0:00:10.150 *********** 2025-06-02 20:29:09.940289 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:29:09.940300 | orchestrator | 2025-06-02 20:29:09.940310 | orchestrator | TASK [Pass test if required mgr modules are enabled] *************************** 2025-06-02 20:29:09.940321 | orchestrator | Monday 02 June 2025 20:29:05 +0000 (0:00:00.133) 0:00:10.283 *********** 2025-06-02 20:29:09.940331 | orchestrator | ok: [testbed-node-0] 2025-06-02 20:29:09.940342 | orchestrator | 2025-06-02 20:29:09.940360 | orchestrator | TASK [Set validation result to passed if no test failed] *********************** 2025-06-02 20:29:09.940371 | orchestrator | Monday 02 June 2025 20:29:05 +0000 (0:00:00.153) 0:00:10.437 *********** 2025-06-02 20:29:09.940382 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-06-02 20:29:09.940393 | orchestrator | 2025-06-02 20:29:09.940403 | orchestrator | TASK [Set validation result to failed if a test failed] ************************ 2025-06-02 20:29:09.940414 | orchestrator | Monday 02 June 2025 20:29:05 +0000 (0:00:00.256) 0:00:10.694 *********** 2025-06-02 20:29:09.940426 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:29:09.940444 | orchestrator | 2025-06-02 20:29:09.940462 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2025-06-02 20:29:09.940510 | orchestrator | Monday 02 June 2025 20:29:05 
+0000 (0:00:00.259) 0:00:10.953 *********** 2025-06-02 20:29:09.940526 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-06-02 20:29:09.940546 | orchestrator | 2025-06-02 20:29:09.940564 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2025-06-02 20:29:09.940583 | orchestrator | Monday 02 June 2025 20:29:07 +0000 (0:00:01.243) 0:00:12.197 *********** 2025-06-02 20:29:09.940602 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-06-02 20:29:09.940614 | orchestrator | 2025-06-02 20:29:09.940625 | orchestrator | TASK [Aggregate test results step three] *************************************** 2025-06-02 20:29:09.940635 | orchestrator | Monday 02 June 2025 20:29:07 +0000 (0:00:00.253) 0:00:12.450 *********** 2025-06-02 20:29:09.940646 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-06-02 20:29:09.940657 | orchestrator | 2025-06-02 20:29:09.940668 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-06-02 20:29:09.940678 | orchestrator | Monday 02 June 2025 20:29:07 +0000 (0:00:00.245) 0:00:12.696 *********** 2025-06-02 20:29:09.940689 | orchestrator | 2025-06-02 20:29:09.940700 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-06-02 20:29:09.940710 | orchestrator | Monday 02 June 2025 20:29:07 +0000 (0:00:00.071) 0:00:12.768 *********** 2025-06-02 20:29:09.940731 | orchestrator | 2025-06-02 20:29:09.940742 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-06-02 20:29:09.940752 | orchestrator | Monday 02 June 2025 20:29:07 +0000 (0:00:00.072) 0:00:12.840 *********** 2025-06-02 20:29:09.940763 | orchestrator | 2025-06-02 20:29:09.940773 | orchestrator | RUNNING HANDLER [Write report file] ******************************************** 2025-06-02 20:29:09.940784 | orchestrator | Monday 02 
June 2025 20:29:07 +0000 (0:00:00.074) 0:00:12.915 *********** 2025-06-02 20:29:09.940795 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-06-02 20:29:09.940805 | orchestrator | 2025-06-02 20:29:09.940816 | orchestrator | TASK [Print report file information] ******************************************* 2025-06-02 20:29:09.940826 | orchestrator | Monday 02 June 2025 20:29:09 +0000 (0:00:01.717) 0:00:14.632 *********** 2025-06-02 20:29:09.940837 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => { 2025-06-02 20:29:09.940848 | orchestrator |  "msg": [ 2025-06-02 20:29:09.940858 | orchestrator |  "Validator run completed.", 2025-06-02 20:29:09.940869 | orchestrator |  "You can find the report file here:", 2025-06-02 20:29:09.940880 | orchestrator |  "/opt/reports/validator/ceph-mgrs-validator-2025-06-02T20:28:55+00:00-report.json", 2025-06-02 20:29:09.940892 | orchestrator |  "on the following host:", 2025-06-02 20:29:09.940902 | orchestrator |  "testbed-manager" 2025-06-02 20:29:09.940913 | orchestrator |  ] 2025-06-02 20:29:09.940924 | orchestrator | } 2025-06-02 20:29:09.940935 | orchestrator | 2025-06-02 20:29:09.940946 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-02 20:29:09.940958 | orchestrator | testbed-node-0 : ok=19  changed=3  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2025-06-02 20:29:09.940971 | orchestrator | testbed-node-1 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-02 20:29:09.940994 | orchestrator | testbed-node-2 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-02 20:29:10.268705 | orchestrator | 2025-06-02 20:29:10.289681 | orchestrator | 2025-06-02 20:29:10.289772 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-02 20:29:10.289789 | orchestrator | Monday 02 June 2025 20:29:09 +0000 (0:00:00.426) 
0:00:15.059 *********** 2025-06-02 20:29:10.289802 | orchestrator | =============================================================================== 2025-06-02 20:29:10.289816 | orchestrator | Gather list of mgr modules ---------------------------------------------- 2.03s 2025-06-02 20:29:10.289828 | orchestrator | Write report file ------------------------------------------------------- 1.72s 2025-06-02 20:29:10.289841 | orchestrator | Aggregate test results step one ----------------------------------------- 1.24s 2025-06-02 20:29:10.289853 | orchestrator | Get container info ------------------------------------------------------ 0.99s 2025-06-02 20:29:10.289865 | orchestrator | Create report output directory ------------------------------------------ 0.81s 2025-06-02 20:29:10.289878 | orchestrator | Extract list of enabled mgr modules ------------------------------------- 0.72s 2025-06-02 20:29:10.289890 | orchestrator | Get timestamp for report file ------------------------------------------- 0.65s 2025-06-02 20:29:10.289903 | orchestrator | Aggregate test results step one ----------------------------------------- 0.65s 2025-06-02 20:29:10.289916 | orchestrator | Set test result to passed if container is existing ---------------------- 0.50s 2025-06-02 20:29:10.289929 | orchestrator | Print report file information ------------------------------------------- 0.43s 2025-06-02 20:29:10.289941 | orchestrator | Prepare test data ------------------------------------------------------- 0.36s 2025-06-02 20:29:10.289953 | orchestrator | Set test result to passed if ceph-mgr is running ------------------------ 0.30s 2025-06-02 20:29:10.289966 | orchestrator | Prepare test data for container existance test -------------------------- 0.29s 2025-06-02 20:29:10.289978 | orchestrator | Set test result to failed if ceph-mgr is not running -------------------- 0.29s 2025-06-02 20:29:10.290015 | orchestrator | Set test result to failed if container is missing 
----------------------- 0.28s
2025-06-02 20:29:10.290113 | orchestrator | Aggregate test results step three --------------------------------------- 0.27s
2025-06-02 20:29:10.290132 | orchestrator | Set validation result to failed if a test failed ------------------------ 0.26s
2025-06-02 20:29:10.290151 | orchestrator | Fail due to missing containers ------------------------------------------ 0.26s
2025-06-02 20:29:10.290171 | orchestrator | Set validation result to passed if no test failed ----------------------- 0.26s
2025-06-02 20:29:10.290189 | orchestrator | Aggregate test results step two ----------------------------------------- 0.25s
2025-06-02 20:29:10.534526 | orchestrator | + osism validate ceph-osds
2025-06-02 20:29:12.222374 | orchestrator | Registering Redlock._acquired_script
2025-06-02 20:29:12.222503 | orchestrator | Registering Redlock._extend_script
2025-06-02 20:29:12.222527 | orchestrator | Registering Redlock._release_script
2025-06-02 20:29:20.853170 | orchestrator |
2025-06-02 20:29:20.853304 | orchestrator | PLAY [Ceph validate OSDs] ******************************************************
2025-06-02 20:29:20.853335 | orchestrator |
2025-06-02 20:29:20.853352 | orchestrator | TASK [Get timestamp for report file] *******************************************
2025-06-02 20:29:20.853365 | orchestrator | Monday 02 June 2025 20:29:16 +0000 (0:00:00.474) 0:00:00.474 ***********
2025-06-02 20:29:20.853377 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2025-06-02 20:29:20.853388 | orchestrator |
2025-06-02 20:29:20.853404 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2025-06-02 20:29:20.853423 | orchestrator | Monday 02 June 2025 20:29:17 +0000 (0:00:00.607) 0:00:01.082 ***********
2025-06-02 20:29:20.853440 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2025-06-02 20:29:20.853553 | orchestrator |
2025-06-02 20:29:20.853578 | orchestrator | TASK [Create report output directory] ******************************************
2025-06-02 20:29:20.853590 | orchestrator | Monday 02 June 2025 20:29:17 +0000 (0:00:00.397) 0:00:01.479 ***********
2025-06-02 20:29:20.853601 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2025-06-02 20:29:20.853619 | orchestrator |
2025-06-02 20:29:20.853637 | orchestrator | TASK [Define report vars] ******************************************************
2025-06-02 20:29:20.853656 | orchestrator | Monday 02 June 2025 20:29:18 +0000 (0:00:00.908) 0:00:02.388 ***********
2025-06-02 20:29:20.853675 | orchestrator | ok: [testbed-node-3]
2025-06-02 20:29:20.853697 | orchestrator |
2025-06-02 20:29:20.853716 | orchestrator | TASK [Define OSD test variables] ***********************************************
2025-06-02 20:29:20.853735 | orchestrator | Monday 02 June 2025 20:29:18 +0000 (0:00:00.129) 0:00:02.518 ***********
2025-06-02 20:29:20.853748 | orchestrator | skipping: [testbed-node-3]
2025-06-02 20:29:20.853761 | orchestrator |
2025-06-02 20:29:20.853773 | orchestrator | TASK [Calculate OSD devices for each host] *************************************
2025-06-02 20:29:20.853786 | orchestrator | Monday 02 June 2025 20:29:18 +0000 (0:00:00.134) 0:00:02.652 ***********
2025-06-02 20:29:20.853798 | orchestrator | skipping: [testbed-node-3]
2025-06-02 20:29:20.853812 | orchestrator | skipping: [testbed-node-4]
2025-06-02 20:29:20.853825 | orchestrator | skipping: [testbed-node-5]
2025-06-02 20:29:20.853838 | orchestrator |
2025-06-02 20:29:20.853850 | orchestrator | TASK [Define OSD test variables] ***********************************************
2025-06-02 20:29:20.853863 | orchestrator | Monday 02 June 2025 20:29:19 +0000 (0:00:00.294) 0:00:02.946 ***********
2025-06-02 20:29:20.853876 | orchestrator | ok: [testbed-node-3]
2025-06-02 20:29:20.853888 | orchestrator |
2025-06-02 20:29:20.853900 | orchestrator | TASK [Calculate OSD devices for each host] *************************************
2025-06-02 20:29:20.853911 | orchestrator | Monday 02 June 2025 20:29:19 +0000 (0:00:00.146) 0:00:03.092 ***********
2025-06-02 20:29:20.853924 | orchestrator | ok: [testbed-node-3]
2025-06-02 20:29:20.853936 | orchestrator | ok: [testbed-node-4]
2025-06-02 20:29:20.853948 | orchestrator | ok: [testbed-node-5]
2025-06-02 20:29:20.853960 | orchestrator |
2025-06-02 20:29:20.853972 | orchestrator | TASK [Calculate total number of OSDs in cluster] *******************************
2025-06-02 20:29:20.854100 | orchestrator | Monday 02 June 2025 20:29:19 +0000 (0:00:00.319) 0:00:03.412 ***********
2025-06-02 20:29:20.854129 | orchestrator | ok: [testbed-node-3]
2025-06-02 20:29:20.854145 | orchestrator |
2025-06-02 20:29:20.854161 | orchestrator | TASK [Prepare test data] *******************************************************
2025-06-02 20:29:20.854179 | orchestrator | Monday 02 June 2025 20:29:20 +0000 (0:00:00.551) 0:00:03.964 ***********
2025-06-02 20:29:20.854196 | orchestrator | ok: [testbed-node-3]
2025-06-02 20:29:20.854213 | orchestrator | ok: [testbed-node-4]
2025-06-02 20:29:20.854231 | orchestrator | ok: [testbed-node-5]
2025-06-02 20:29:20.854248 | orchestrator |
2025-06-02 20:29:20.854265 | orchestrator | TASK [Get list of ceph-osd containers on host] *********************************
2025-06-02 20:29:20.854283 | orchestrator | Monday 02 June 2025 20:29:20 +0000 (0:00:00.479) 0:00:04.443 ***********
2025-06-02 20:29:20.854324 | orchestrator | skipping: [testbed-node-3] => (item={'id': '9b6339481d81b79ef9d61df82bda4abc0301301cf2a40cdac0cb6449c381c65f', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250530', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 5 minutes (healthy)'})
2025-06-02 20:29:20.854346 | orchestrator | skipping: [testbed-node-3] => (item={'id': '66ea19c5e569e7bab97fe4238c2f101ed4ff58daeee40545b7d6f5f1e845f6f3', 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250530', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 6 minutes (healthy)'})
2025-06-02 20:29:20.854370 | orchestrator | skipping: [testbed-node-3] => (item={'id': '92f7b0d3d1bd219a3e23f76da902c13687c9435e3372a92427306526f8971d9e', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250530', 'name': '/nova_ssh', 'state': 'running', 'status': 'Up 6 minutes (healthy)'})
2025-06-02 20:29:20.854392 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'e28e40c571e17afe46d99765c3aba59d3bc71d260fa24ee0c06ecb77e3f208d4', 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'name': '/cinder_backup', 'state': 'running', 'status': 'Up 10 minutes (healthy)'})
2025-06-02 20:29:20.854411 | orchestrator | skipping: [testbed-node-3] => (item={'id': '84f3ef0e22f959b4bd0cc4a463451e55fc8da735ad4779ae28b7e7d3eb8a2789', 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'name': '/cinder_volume', 'state': 'running', 'status': 'Up 10 minutes (healthy)'})
2025-06-02 20:29:20.854493 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'b504275edd22f883dc226393e1736df5ae0e31703c89e050dc822b1c255fb742', 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250530.0.20250530', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 12 minutes'})
2025-06-02 20:29:20.854529 | orchestrator | skipping: [testbed-node-3] => (item={'id': '7a86050a953044d15ccd44f827acebcb4adb93763bbf39e21746c63319a7a9b8', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 13 minutes (healthy)'})
2025-06-02 20:29:20.854550 | orchestrator | skipping: [testbed-node-3] => (item={'id': '9fccb9eacf4842112cb8362a7ba03bfdf5b80a7af0044726a7a40bdb336d5132', 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 13 minutes'})
2025-06-02 20:29:20.854569 | orchestrator | skipping: [testbed-node-3] => (item={'id': '5ade709eeec852a0a2e4d5c2c1c09a64a52278e14e4d7c3408326ef3d2afc9da', 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 13 minutes'})
2025-06-02 20:29:20.854580 | orchestrator | skipping: [testbed-node-3] => (item={'id': '67a814443a6150f7ef5ce4b972e6e974348e4a025cd0b157dc5e7d685e41ff68', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-rgw-default-testbed-node-3-rgw0', 'state': 'running', 'status': 'Up 21 minutes'})
2025-06-02 20:29:20.854592 | orchestrator | skipping: [testbed-node-3] => (item={'id': '1791365144de82a86919856fb4413472f4b36034cf30a920811def8b27aca9d7', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-mds-testbed-node-3', 'state': 'running', 'status': 'Up 22 minutes'})
2025-06-02 20:29:20.854620 | orchestrator | skipping: [testbed-node-3] => (item={'id': '8b86660fe89015b9ef0f759c3aeb4da0d22e6127396bda243488ea7854effbed', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-crash-testbed-node-3', 'state': 'running', 'status': 'Up 23 minutes'})
2025-06-02 20:29:20.854648 | orchestrator | ok: [testbed-node-3] => (item={'id': '349c6fb0131cdd587b5d8cfeb206fd54d92704856faae0767a00ab29561f229c', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-2', 'state': 'running', 'status': 'Up 24 minutes'})
2025-06-02 20:29:20.854672 | orchestrator | ok: [testbed-node-3] => (item={'id': '2ab77461eee066b7a9df4c5b836a04d912581c2c9a4bc22edb5c2e2db7261045', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-5', 'state': 'running', 'status': 'Up 24 minutes'})
2025-06-02 20:29:20.854691 | orchestrator | skipping: [testbed-node-3] => (item={'id': '7fd0618e13baa20ae57e7d32c0737339a99868c048f837a63d14a4c73ab3c247', 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up 27 minutes'})
2025-06-02 20:29:20.854708 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'b5e15d6536b1c4e1c43111ba61c3c1ef51adc9419f6b32e0f138794a288ffc04', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250530', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up 29 minutes (healthy)'})
2025-06-02 20:29:20.854728 | orchestrator | skipping: [testbed-node-3] => (item={'id': '2325c4cc9dc35376051cb131a17872f6da2230d8d94da05c0baf819256a94112', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250530', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up 29 minutes (healthy)'})
2025-06-02 20:29:20.854753 | orchestrator | skipping: [testbed-node-3] => (item={'id': '69214adfb69fe5457fe7149126c9410d721a22fb386b5f3c7866bcdc04437999', 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'name': '/cron', 'state': 'running', 'status': 'Up 30 minutes'})
2025-06-02 20:29:20.854773 | orchestrator | skipping: [testbed-node-3] => (item={'id': '300a333bee4a665f5bc24c776a5144be012f5668e860d7f985dbbb4e67d77719', 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 30 minutes'})
2025-06-02 20:29:20.854790 | orchestrator | skipping: [testbed-node-3] => (item={'id': '57d6cae49294761beb06d1612aff2a55c74db1ab96ba7a285bddbebaa3aa0b6c', 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'name': '/fluentd', 'state': 'running', 'status': 'Up 30 minutes'})
2025-06-02 20:29:20.854824 | orchestrator | skipping: [testbed-node-4] => (item={'id': '71a26fbe02f189d1a7592b269c6cd35e95f7af693141c8d1eeef239e8efbce96', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250530', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 5 minutes (healthy)'})
2025-06-02 20:29:21.014427 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'a4b575cdb4859bf337347ee50f59c3371f24ba05060a31adfda3de87d24677c2', 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250530', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 6 minutes (healthy)'})
2025-06-02 20:29:21.014588 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'd098d77cc9e228fbcdbefca025c40aadc292ad9c26a1d4b1398059c957b53b25', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250530', 'name': '/nova_ssh', 'state': 'running', 'status': 'Up 6 minutes (healthy)'})
2025-06-02 20:29:21.014605 | orchestrator | skipping: [testbed-node-4] => (item={'id': '1050d464431e2d57865e1f010ab5c5a05a724b04c64e4c011167728fbb09fc67', 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'name': '/cinder_backup', 'state': 'running', 'status': 'Up 10 minutes (healthy)'})
2025-06-02 20:29:21.014639 | orchestrator | skipping: [testbed-node-4] => (item={'id': '6dd751d52196de89e0e3d733376372c9a2d5b5730bc1dcbe9046923b39d43e39', 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'name': '/cinder_volume', 'state': 'running', 'status': 'Up 10 minutes (healthy)'})
2025-06-02 20:29:21.014651 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'b775bb4cd1e59758a2a7fe3fcfe0cdd1876968f573edb39e6786e95e5a85710d', 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250530.0.20250530', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 12 minutes'})
2025-06-02 20:29:21.014665 | orchestrator | skipping: [testbed-node-4] => (item={'id': '8cc792809fee0d86316684f32be8c4d3261943aa4822f84986ddf14b55c8ef6d', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 13 minutes (healthy)'})
2025-06-02 20:29:21.014678 | orchestrator | skipping: [testbed-node-4] => (item={'id': '24fc2e1b07c9a37be7dce74747d6c4d4ef13092fd5b60d94cdc6c06c025769db', 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 13 minutes'})
2025-06-02 20:29:21.014690 | orchestrator | skipping: [testbed-node-4] => (item={'id': '3ea2bc00fa814a2b5a71582a38ce88083c56c28a12e5d8e6c7d1f989bac3b67d', 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 13 minutes'})
2025-06-02 20:29:21.014701 | orchestrator | skipping: [testbed-node-4] => (item={'id': '02d95372452074bdbccfed9ce8662aafd506e1c316c0ce8ee98964c5a3aaf093', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-rgw-default-testbed-node-4-rgw0', 'state': 'running', 'status': 'Up 21 minutes'})
2025-06-02 20:29:21.014713 | orchestrator | skipping: [testbed-node-4] => (item={'id': '54ffe1986d6a860d18e42e62d299f1d917eebdbe15519a2ccea49225a6c3f4d6', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-mds-testbed-node-4', 'state': 'running', 'status': 'Up 22 minutes'})
2025-06-02 20:29:21.014725 | orchestrator | skipping: [testbed-node-4] => (item={'id': '218d719b9fc4108531d0f5f0955ab81bd94540584a59958c138c4a1c750ae27c', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-crash-testbed-node-4', 'state': 'running', 'status': 'Up 23 minutes'})
2025-06-02 20:29:21.014751 | orchestrator | ok: [testbed-node-4] => (item={'id': '60c3cb647b15af18c050945649542d8ad1b57d2ebcb2086615ba96fbe37cb51e', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-3', 'state': 'running', 'status': 'Up 24 minutes'})
2025-06-02 20:29:21.014764 | orchestrator | ok: [testbed-node-4] => (item={'id': '9f7167be1092a2011faa8cf866819bcd62c99e27b8f9407357d20a766958f2ec', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-0', 'state': 'running', 'status': 'Up 24 minutes'})
2025-06-02 20:29:21.014775 | orchestrator | skipping: [testbed-node-4] => (item={'id': '38de61d974f4cf08518b2eb84cfceea50012b010c69a2b1a3de7dfcacebce98a', 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up 27 minutes'})
2025-06-02 20:29:21.014804 | orchestrator | skipping: [testbed-node-4] => (item={'id': '7e8c39d0dd1bbc69f9fc3b4b15d2917cfc495cb716732b94ace5e09a40a2c354', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250530', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up 29 minutes (healthy)'})
2025-06-02 20:29:21.014817 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'ce3cd952c84080535106d230547c3ee31056d45f6ef5cdfe10aef162921c5346', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250530', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up 29 minutes (healthy)'})
2025-06-02 20:29:21.014828 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'c0ef7c3eb5bfd2ae194fed33fdf3f1288df12a6d445768fe3525fd0772541f1e', 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'name': '/cron', 'state': 'running', 'status': 'Up 30 minutes'})
2025-06-02 20:29:21.014848 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'ecf2617a4b42320fe9d7d32eacda5e08180646844dff76fd26ec2b7c9501e18f', 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 30 minutes'})
2025-06-02 20:29:21.014859 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'b38654707a9a24dc2757e9d2305ad1ea91aeca35948a2a7b35b7781b7ed4c742', 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'name': '/fluentd', 'state': 'running', 'status': 'Up 31 minutes'})
2025-06-02 20:29:21.014871 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'ae65d83f037ad37349dbef163c1404c8d4d3cfffea52e3b518529f2631176728', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250530', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 5 minutes (healthy)'})
2025-06-02 20:29:21.014882 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'de9a79baa0ec42ae699558ce2617e5bf4570bf6588607b437c8c578f9de42066', 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250530', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 6 minutes (healthy)'})
2025-06-02 20:29:21.014893 | orchestrator | skipping: [testbed-node-5] => (item={'id': '3a4f2bcff6903835cd9179f44cdeb53ad38986b458cd3c661d2df045b9c6a638', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250530', 'name': '/nova_ssh', 'state': 'running', 'status': 'Up 6 minutes (healthy)'})
2025-06-02 20:29:21.014904 | orchestrator | skipping: [testbed-node-5] => (item={'id': '988c7c22d345d570fbbeeb2e3df12e73d878e882dc03c6207160b8f6c6519679', 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'name': '/cinder_backup', 'state': 'running', 'status': 'Up 10 minutes (healthy)'})
2025-06-02 20:29:21.014916 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'b2596a2a73071cddde86c6fce9c2a2f1a3f391d518a70544a2b96a84f200e6a7', 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'name': '/cinder_volume', 'state': 'running', 'status': 'Up 10 minutes (healthy)'})
2025-06-02 20:29:21.014927 | orchestrator | skipping: [testbed-node-5] => (item={'id': '4946fd2de191792913ad7558b3b7973ee10728e521db88c8ac4b1821377a203c', 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250530.0.20250530', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 12 minutes'})
2025-06-02 20:29:21.014938 | orchestrator | skipping: [testbed-node-5] => (item={'id': '58fd14687c31d89b07540de3afa4bfb2bfc4e696eca0593004209565f85c8cc8', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 13 minutes (healthy)'})
2025-06-02 20:29:21.014958 | orchestrator | skipping: [testbed-node-5] => (item={'id': '5d975f6bf2f8fa2a96dc3e2efe6e159aba4f143aa9c99cae5cffa42ae4e01185', 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 13 minutes'})
2025-06-02 20:29:21.014972 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'e82391a9ea7abe10c49e77e1fb0bde7a03dac4198af6224ade41d89e003538c5', 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 13 minutes'})
2025-06-02 20:29:21.014986 | orchestrator | skipping: [testbed-node-5] => (item={'id': '682d953f8c6abb145ca69eeafff77ff889cef7b7e1806b2e18e75ef19abe191a', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-rgw-default-testbed-node-5-rgw0', 'state': 'running', 'status': 'Up 21 minutes'})
2025-06-02 20:29:21.015006 | orchestrator | skipping: [testbed-node-5] => (item={'id': '2a860b6ff7c7f2af30dbe27502a243aff95b5be022610b03118a49c8ccb6bf2b', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-mds-testbed-node-5', 'state': 'running', 'status': 'Up 22 minutes'})
2025-06-02 20:29:29.071114 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'a4f286a692dbab0dadb27f7d02f58b330f41bc5752691dd6d6685da118afeaee', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-crash-testbed-node-5', 'state': 'running', 'status': 'Up 23 minutes'})
2025-06-02 20:29:29.071225 | orchestrator | ok: [testbed-node-5] => (item={'id': 'f5b53c831143dee5295dcae5fa3b4aaa28b267a43c277a3a90524ac5efd38144', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-4', 'state': 'running', 'status': 'Up 24 minutes'})
2025-06-02 20:29:29.071244 | orchestrator | ok: [testbed-node-5] => (item={'id': '9d77e3ffd7e4178216d0f0fd06ac52a36b874c46312b0e48daacacf9461c0084', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-1', 'state': 'running', 'status': 'Up 24 minutes'})
2025-06-02 20:29:29.071258 | orchestrator | skipping: [testbed-node-5] => (item={'id': '29c71dcba84d0d35fc3eb89fdc1d9581b9d6248188001dcd737e44a2ceb68a52', 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up 27 minutes'})
2025-06-02 20:29:29.071272 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'f1fe06e0bd3cbb87f37b95010bcdd2adda0684e96e34406534e2645a4de25b7c', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250530', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up 29 minutes (healthy)'})
2025-06-02 20:29:29.071288 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'af8b778cb96cd8df778ac7e0b5c0917d90f1bd821b363dc1ef0972b191909e73', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250530', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up 29 minutes (healthy)'})
2025-06-02 20:29:29.071302 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'a62bb63eb6d6e52ca343cd5d9bfe713c38ad890cb0fb2765959ab7f0d6ef1c90', 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'name': '/cron', 'state': 'running', 'status': 'Up 30 minutes'})
2025-06-02 20:29:29.071316 | orchestrator | skipping: [testbed-node-5] => (item={'id': '143fbc1c35ab4b154f6447516e1051951f5df2160b8e1f45a7fa6a2dfe3e5698', 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 30 minutes'})
2025-06-02 20:29:29.071331 | orchestrator | skipping: [testbed-node-5] => (item={'id': '1d4414e77e28b90d45a0908eac00fd87f68c4319a1d301ac9508590628aca5bd', 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'name': '/fluentd', 'state': 'running', 'status': 'Up 30 minutes'})
2025-06-02 20:29:29.071346 | orchestrator |
2025-06-02 20:29:29.071362 | orchestrator | TASK [Get count of ceph-osd containers on host] ********************************
2025-06-02 20:29:29.071378 | orchestrator | Monday 02 June 2025 20:29:21 +0000 (0:00:00.490) 0:00:04.934 ***********
2025-06-02 20:29:29.071392 | orchestrator | ok: [testbed-node-3]
2025-06-02 20:29:29.071406 | orchestrator | ok: [testbed-node-4]
2025-06-02 20:29:29.071420 | orchestrator | ok: [testbed-node-5]
2025-06-02 20:29:29.071433 | orchestrator |
2025-06-02 20:29:29.071447 | orchestrator | TASK [Set test result to failed when count of containers is wrong] *************
2025-06-02 20:29:29.071536 | orchestrator | Monday 02 June 2025 20:29:21 +0000 (0:00:00.275) 0:00:05.209 ***********
2025-06-02 20:29:29.071545 | orchestrator | skipping: [testbed-node-3]
2025-06-02 20:29:29.071554 | orchestrator | skipping: [testbed-node-4]
2025-06-02 20:29:29.071562 | orchestrator | skipping: [testbed-node-5]
2025-06-02 20:29:29.071570 | orchestrator |
2025-06-02 20:29:29.071593 | orchestrator | TASK [Set test result to passed if count matches] ******************************
2025-06-02 20:29:29.071602 | orchestrator | Monday 02 June 2025 20:29:21 +0000 (0:00:00.434) 0:00:05.643 ***********
2025-06-02 20:29:29.071610 | orchestrator | ok: [testbed-node-3]
2025-06-02 20:29:29.071618 | orchestrator | ok: [testbed-node-4]
2025-06-02 20:29:29.071626 | orchestrator | ok: [testbed-node-5]
2025-06-02 20:29:29.071653 | orchestrator |
2025-06-02 20:29:29.071663 | orchestrator | TASK [Prepare test data] *******************************************************
2025-06-02 20:29:29.071672 | orchestrator | Monday 02 June 2025 20:29:22 +0000 (0:00:00.366) 0:00:06.010 ***********
2025-06-02 20:29:29.071681 | orchestrator | ok: [testbed-node-3]
2025-06-02 20:29:29.071691 | orchestrator | ok: [testbed-node-4]
2025-06-02 20:29:29.071705 | orchestrator | ok: [testbed-node-5]
2025-06-02 20:29:29.071720 | orchestrator |
2025-06-02 20:29:29.071735 | orchestrator | TASK [Get list of ceph-osd containers that are not running] ********************
2025-06-02 20:29:29.071749 | orchestrator | Monday 02 June 2025 20:29:22 +0000 (0:00:00.269) 0:00:06.280 ***********
2025-06-02 20:29:29.071764 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'ceph-osd-2', 'osd_id': '2', 'state': 'running'})
2025-06-02 20:29:29.071780 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'ceph-osd-5', 'osd_id': '5', 'state': 'running'})
2025-06-02 20:29:29.071793 | orchestrator | skipping: [testbed-node-3]
2025-06-02 20:29:29.071806 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'ceph-osd-3', 'osd_id': '3', 'state': 'running'})
2025-06-02 20:29:29.071822 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'ceph-osd-0', 'osd_id': '0', 'state': 'running'})
2025-06-02 20:29:29.071858 | orchestrator | skipping: [testbed-node-4]
2025-06-02 20:29:29.071873 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'ceph-osd-4', 'osd_id': '4', 'state': 'running'})
2025-06-02 20:29:29.071887 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'ceph-osd-1', 'osd_id': '1', 'state': 'running'})
2025-06-02 20:29:29.071901 | orchestrator | skipping: [testbed-node-5]
2025-06-02 20:29:29.071917 | orchestrator |
2025-06-02 20:29:29.071931 | orchestrator | TASK [Get count of ceph-osd containers that are not running] *******************
2025-06-02 20:29:29.071944 | orchestrator | Monday 02 June 2025 20:29:22 +0000 (0:00:00.307) 0:00:06.587 ***********
2025-06-02 20:29:29.071952 | orchestrator | ok: [testbed-node-3]
2025-06-02 20:29:29.071960 | orchestrator | ok: [testbed-node-4]
2025-06-02 20:29:29.071968 | orchestrator | ok: [testbed-node-5]
2025-06-02 20:29:29.071976 | orchestrator |
2025-06-02 20:29:29.071983 | orchestrator | TASK [Set test result to failed if an OSD is not running] **********************
2025-06-02 20:29:29.071991 | orchestrator | Monday 02 June 2025 20:29:23 +0000 (0:00:00.477) 0:00:07.065 ***********
2025-06-02 20:29:29.071999 | orchestrator | skipping: [testbed-node-3]
2025-06-02 20:29:29.072007 | orchestrator | skipping: [testbed-node-4]
2025-06-02 20:29:29.072015 | orchestrator | skipping: [testbed-node-5]
2025-06-02 20:29:29.072022 | orchestrator |
2025-06-02 20:29:29.072030 | orchestrator | TASK [Set test result to failed if an OSD is not running] **********************
2025-06-02 20:29:29.072038 | orchestrator | Monday 02 June 2025 20:29:23 +0000 (0:00:00.289) 0:00:07.354 ***********
2025-06-02 20:29:29.072046 | orchestrator | skipping: [testbed-node-3]
2025-06-02 20:29:29.072054 | orchestrator | skipping: [testbed-node-4]
2025-06-02 20:29:29.072062 | orchestrator | skipping: [testbed-node-5]
2025-06-02 20:29:29.072069 | orchestrator |
2025-06-02 20:29:29.072077 | orchestrator | TASK [Set test result to passed if all containers are running] *****************
2025-06-02 20:29:29.072085 | orchestrator | Monday 02 June 2025 20:29:23 +0000 (0:00:00.304) 0:00:07.658 ***********
2025-06-02 20:29:29.072092 | orchestrator | ok: [testbed-node-3]
2025-06-02 20:29:29.072100 | orchestrator | ok: [testbed-node-4]
2025-06-02 20:29:29.072108 | orchestrator | ok: [testbed-node-5]
2025-06-02 20:29:29.072116 | orchestrator |
2025-06-02 20:29:29.072123 | orchestrator | TASK [Aggregate test results step one] *****************************************
2025-06-02 20:29:29.072131 | orchestrator | Monday 02 June 2025 20:29:24 +0000 (0:00:00.292) 0:00:07.951 ***********
2025-06-02 20:29:29.072139 | orchestrator | skipping: [testbed-node-3]
2025-06-02 20:29:29.072146 | orchestrator |
2025-06-02 20:29:29.072154 | orchestrator | TASK [Aggregate test results step two] *****************************************
2025-06-02 20:29:29.072162 | orchestrator | Monday 02 June 2025 20:29:24 +0000 (0:00:00.639) 0:00:08.590 ***********
2025-06-02 20:29:29.072169 | orchestrator | skipping: [testbed-node-3]
2025-06-02 20:29:29.072186 | orchestrator |
2025-06-02 20:29:29.072194 | orchestrator | TASK [Aggregate test results step three] ***************************************
2025-06-02 20:29:29.072201 | orchestrator | Monday 02 June 2025 20:29:24 +0000 (0:00:00.258) 0:00:08.849 ***********
2025-06-02 20:29:29.072209 | orchestrator | skipping: [testbed-node-3]
2025-06-02 20:29:29.072216 | orchestrator |
2025-06-02 20:29:29.072224 | orchestrator | TASK [Flush handlers] **********************************************************
2025-06-02 20:29:29.072234 | orchestrator | Monday 02 June 2025 20:29:25 +0000 (0:00:00.242) 0:00:09.092 ***********
2025-06-02 20:29:29.072247 | orchestrator |
2025-06-02 20:29:29.072258 | orchestrator | TASK [Flush handlers] **********************************************************
2025-06-02 20:29:29.072272 | orchestrator | Monday 02 June 2025 20:29:25 +0000 (0:00:00.066) 0:00:09.158 ***********
2025-06-02 20:29:29.072286 | orchestrator |
2025-06-02 20:29:29.072299 | orchestrator | TASK [Flush handlers] **********************************************************
2025-06-02 20:29:29.072312 | orchestrator | Monday 02 June 2025 20:29:25 +0000 (0:00:00.066) 0:00:09.225 ***********
2025-06-02 20:29:29.072322 | orchestrator |
2025-06-02 20:29:29.072330 | orchestrator | TASK [Print report file information] *******************************************
2025-06-02 20:29:29.072342 | orchestrator | Monday 02 June 2025 20:29:25 +0000 (0:00:00.073) 0:00:09.298 ***********
2025-06-02 20:29:29.072355 | orchestrator | skipping: [testbed-node-3]
2025-06-02 20:29:29.072368 | orchestrator |
2025-06-02 20:29:29.072381 | orchestrator | TASK [Fail early due to containers not running] ********************************
2025-06-02 20:29:29.072395 | orchestrator | Monday 02 June 2025 20:29:25 +0000 (0:00:00.237) 0:00:09.536 ***********
2025-06-02 20:29:29.072408 | orchestrator | skipping: [testbed-node-3]
2025-06-02 20:29:29.072421 | orchestrator |
2025-06-02 20:29:29.072434 | orchestrator | TASK [Prepare test data] *******************************************************
2025-06-02 20:29:29.072448 | orchestrator | Monday 02 June 2025 20:29:25 +0000 (0:00:00.240) 0:00:09.777 ***********
2025-06-02 20:29:29.072490 | orchestrator | ok: [testbed-node-3]
2025-06-02 20:29:29.072503 | orchestrator | ok: [testbed-node-4]
2025-06-02 20:29:29.072517 | orchestrator | ok: [testbed-node-5]
2025-06-02 20:29:29.072530 | orchestrator |
2025-06-02 20:29:29.072543 | orchestrator | TASK [Set _mon_hostname fact] **************************************************
2025-06-02 20:29:29.072555 | orchestrator | Monday 02 June 2025 20:29:26 +0000 (0:00:00.290) 0:00:10.067 ***********
2025-06-02 20:29:29.072567 | orchestrator | ok: [testbed-node-3]
2025-06-02 20:29:29.072587 | orchestrator |
2025-06-02 20:29:29.072602 | orchestrator | TASK [Get ceph osd tree] *******************************************************
2025-06-02 20:29:29.072615 | orchestrator | Monday 02 June 2025 20:29:26 +0000 (0:00:00.609) 0:00:10.677 ***********
2025-06-02 20:29:29.072629 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2025-06-02 20:29:29.072641 | orchestrator |
2025-06-02 20:29:29.072652 | orchestrator | TASK [Parse osd tree from JSON] ************************************************
2025-06-02 20:29:29.072663 | orchestrator | Monday 02 June 2025 20:29:28 +0000 (0:00:01.674) 0:00:12.351 ***********
2025-06-02 20:29:29.072675 | orchestrator | ok: [testbed-node-3]
2025-06-02 20:29:29.072687 | orchestrator |
2025-06-02 20:29:29.072699 | orchestrator | TASK [Get OSDs that are not up or in] ******************************************
2025-06-02 20:29:29.072710 | orchestrator | Monday 02 June 2025 20:29:28 +0000 (0:00:00.150) 0:00:12.501 ***********
2025-06-02 20:29:29.072722 | orchestrator | ok: [testbed-node-3]
2025-06-02 20:29:29.072735 | orchestrator |
2025-06-02 20:29:29.072747 | orchestrator | TASK [Fail test if OSDs are not up or in] **************************************
2025-06-02 20:29:29.072759 | orchestrator | Monday 02 June 2025 20:29:28 +0000 (0:00:00.299) 0:00:12.800 ***********
2025-06-02 20:29:29.072782 | orchestrator | skipping: [testbed-node-3]
2025-06-02 20:29:41.622903 | orchestrator |
2025-06-02 20:29:41.624159 | orchestrator | TASK [Pass test if OSDs are all up and in] *************************************
2025-06-02 20:29:41.624235 | orchestrator | Monday 02 June 2025 20:29:29 +0000 (0:00:00.116) 0:00:12.916 ***********
2025-06-02 20:29:41.624251 | orchestrator | ok: [testbed-node-3]
2025-06-02 20:29:41.624266 | orchestrator |
2025-06-02 20:29:41.624310 | orchestrator | TASK [Prepare test data] *******************************************************
2025-06-02 20:29:41.624322 | orchestrator | Monday 02 June 2025 20:29:29 +0000 (0:00:00.140) 0:00:13.057 ***********
2025-06-02 20:29:41.624334 | orchestrator | ok: [testbed-node-3]
2025-06-02 20:29:41.624347 | orchestrator | ok: [testbed-node-4]
2025-06-02 20:29:41.624360 | orchestrator | ok: [testbed-node-5]
2025-06-02 20:29:41.624373 | orchestrator |
2025-06-02 20:29:41.624386 | orchestrator | TASK [List ceph LVM volumes and collect data] **********************************
2025-06-02 20:29:41.624400 | orchestrator | Monday 02 June 2025 20:29:29 +0000 (0:00:00.329) 0:00:13.386 ***********
2025-06-02 20:29:41.624414 | orchestrator | changed: [testbed-node-4]
2025-06-02 20:29:41.624429 | orchestrator | changed: [testbed-node-3]
2025-06-02 20:29:41.624465 | orchestrator | changed: [testbed-node-5]
2025-06-02 20:29:41.624479 | orchestrator |
2025-06-02 20:29:41.624492 | orchestrator | TASK [Parse LVM data as JSON] **************************************************
2025-06-02 20:29:41.624505 | orchestrator | Monday 02 June 2025 20:29:32 +0000 (0:00:02.614) 0:00:16.001 ***********
2025-06-02 20:29:41.624519 | orchestrator | ok: [testbed-node-3]
2025-06-02 20:29:41.624532 | orchestrator | ok: [testbed-node-4]
2025-06-02 20:29:41.624545 | orchestrator | ok: [testbed-node-5]
2025-06-02 20:29:41.624559 | orchestrator |
2025-06-02 20:29:41.624572 | orchestrator | TASK [Get unencrypted and encrypted OSDs] **************************************
2025-06-02 20:29:41.624587 | orchestrator | Monday 02 June 2025 20:29:32 +0000 (0:00:00.308) 0:00:16.310 ***********
2025-06-02 20:29:41.624600 | orchestrator | ok: [testbed-node-3]
2025-06-02 20:29:41.624614 | orchestrator | ok: [testbed-node-4]
2025-06-02 20:29:41.624627 | orchestrator | ok: [testbed-node-5]
2025-06-02 20:29:41.624641 | orchestrator |
2025-06-02 20:29:41.624654 | orchestrator | TASK [Fail if count of encrypted OSDs does not match] **************************
2025-06-02 20:29:41.624668 | orchestrator | Monday 02 June 2025 20:29:32 +0000 (0:00:00.531) 0:00:16.841 ***********
2025-06-02 20:29:41.624682 | orchestrator | skipping: [testbed-node-3]
2025-06-02 20:29:41.624695 | orchestrator | skipping: [testbed-node-4]
2025-06-02 20:29:41.624709 | orchestrator | skipping: [testbed-node-5]
2025-06-02 20:29:41.624722 | orchestrator |
2025-06-02 20:29:41.624736 | orchestrator | TASK [Pass if count of encrypted OSDs equals count of OSDs] ********************
2025-06-02 20:29:41.624750 | orchestrator | Monday 02 June 2025 20:29:33 +0000 (0:00:00.300) 0:00:17.141 ***********
2025-06-02 20:29:41.624763 | orchestrator | ok: [testbed-node-3]
2025-06-02 20:29:41.624776 | orchestrator | ok: [testbed-node-4]
2025-06-02 20:29:41.624789 | orchestrator | ok: [testbed-node-5]
2025-06-02 20:29:41.624802 | orchestrator |
2025-06-02 20:29:41.624816 | orchestrator | TASK [Fail if count of unencrypted OSDs does not match] ************************
2025-06-02 20:29:41.624830 | orchestrator | Monday 02 June 2025 20:29:33 +0000 (0:00:00.508) 0:00:17.650 ***********
2025-06-02 20:29:41.624843 | orchestrator | skipping: [testbed-node-3]
2025-06-02 20:29:41.624857 | orchestrator | skipping: [testbed-node-4]
2025-06-02 20:29:41.624871 | orchestrator | skipping: [testbed-node-5]
2025-06-02 20:29:41.624884 | orchestrator |
2025-06-02 20:29:41.624898 | orchestrator | TASK [Pass if count of unencrypted OSDs equals count of OSDs] ******************
2025-06-02 20:29:41.624911 | orchestrator | Monday 02 June 2025 20:29:34 +0000 (0:00:00.301) 0:00:17.952 ***********
2025-06-02 20:29:41.624925 | orchestrator | skipping: [testbed-node-3]
2025-06-02 20:29:41.624939 | orchestrator | skipping: [testbed-node-4]
2025-06-02 20:29:41.624952 | orchestrator | skipping: [testbed-node-5]
2025-06-02 20:29:41.624966 | orchestrator |
2025-06-02 20:29:41.624978 | orchestrator | TASK [Prepare test data] *******************************************************
2025-06-02 20:29:41.624991 | orchestrator | Monday 02 June 2025 20:29:34 +0000 (0:00:00.282) 0:00:18.234 ***********
2025-06-02 20:29:41.625005 | orchestrator | ok: [testbed-node-3]
2025-06-02 20:29:41.625017 | orchestrator | ok: [testbed-node-4]
2025-06-02 20:29:41.625028 | orchestrator | ok: [testbed-node-5]
2025-06-02 20:29:41.625040 | orchestrator |
2025-06-02 20:29:41.625053 | orchestrator | TASK [Get CRUSH node data of each OSD host and root node childs] ***************
2025-06-02 20:29:41.625079 | orchestrator | Monday 02 June 2025 20:29:34 +0000 (0:00:00.475) 0:00:18.709 ***********
2025-06-02 20:29:41.625094 | orchestrator | ok: [testbed-node-3]
2025-06-02 20:29:41.625108 | orchestrator | ok: [testbed-node-4]
2025-06-02 20:29:41.625121 | orchestrator | ok: [testbed-node-5]
2025-06-02 20:29:41.625204 | orchestrator |
2025-06-02 20:29:41.625221 | orchestrator | TASK [Calculate sub test expression results] ***********************************
2025-06-02 20:29:41.625236 | orchestrator | Monday 02 June 2025 20:29:35 +0000 (0:00:00.755) 0:00:19.465 ***********
2025-06-02 20:29:41.625250
| orchestrator | ok: [testbed-node-3] 2025-06-02 20:29:41.625264 | orchestrator | ok: [testbed-node-4] 2025-06-02 20:29:41.625278 | orchestrator | ok: [testbed-node-5] 2025-06-02 20:29:41.625292 | orchestrator | 2025-06-02 20:29:41.625305 | orchestrator | TASK [Fail test if any sub test failed] **************************************** 2025-06-02 20:29:41.625320 | orchestrator | Monday 02 June 2025 20:29:35 +0000 (0:00:00.295) 0:00:19.761 *********** 2025-06-02 20:29:41.625334 | orchestrator | skipping: [testbed-node-3] 2025-06-02 20:29:41.625347 | orchestrator | skipping: [testbed-node-4] 2025-06-02 20:29:41.625362 | orchestrator | skipping: [testbed-node-5] 2025-06-02 20:29:41.625376 | orchestrator | 2025-06-02 20:29:41.625390 | orchestrator | TASK [Pass test if no sub test failed] ***************************************** 2025-06-02 20:29:41.625404 | orchestrator | Monday 02 June 2025 20:29:36 +0000 (0:00:00.298) 0:00:20.059 *********** 2025-06-02 20:29:41.625418 | orchestrator | ok: [testbed-node-3] 2025-06-02 20:29:41.625431 | orchestrator | ok: [testbed-node-4] 2025-06-02 20:29:41.625461 | orchestrator | ok: [testbed-node-5] 2025-06-02 20:29:41.625475 | orchestrator | 2025-06-02 20:29:41.625489 | orchestrator | TASK [Set validation result to passed if no test failed] *********************** 2025-06-02 20:29:41.625502 | orchestrator | Monday 02 June 2025 20:29:36 +0000 (0:00:00.308) 0:00:20.368 *********** 2025-06-02 20:29:41.625516 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-06-02 20:29:41.625573 | orchestrator | 2025-06-02 20:29:41.625588 | orchestrator | TASK [Set validation result to failed if a test failed] ************************ 2025-06-02 20:29:41.625601 | orchestrator | Monday 02 June 2025 20:29:37 +0000 (0:00:00.674) 0:00:21.042 *********** 2025-06-02 20:29:41.625614 | orchestrator | skipping: [testbed-node-3] 2025-06-02 20:29:41.625629 | orchestrator | 2025-06-02 20:29:41.625678 | orchestrator | TASK [Aggregate test 
results step one] ***************************************** 2025-06-02 20:29:41.625694 | orchestrator | Monday 02 June 2025 20:29:37 +0000 (0:00:00.247) 0:00:21.290 *********** 2025-06-02 20:29:41.625708 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-06-02 20:29:41.625721 | orchestrator | 2025-06-02 20:29:41.625735 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2025-06-02 20:29:41.625749 | orchestrator | Monday 02 June 2025 20:29:39 +0000 (0:00:01.607) 0:00:22.897 *********** 2025-06-02 20:29:41.625762 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-06-02 20:29:41.625776 | orchestrator | 2025-06-02 20:29:41.625790 | orchestrator | TASK [Aggregate test results step three] *************************************** 2025-06-02 20:29:41.625803 | orchestrator | Monday 02 June 2025 20:29:39 +0000 (0:00:00.290) 0:00:23.188 *********** 2025-06-02 20:29:41.625817 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-06-02 20:29:41.625831 | orchestrator | 2025-06-02 20:29:41.625844 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-06-02 20:29:41.625858 | orchestrator | Monday 02 June 2025 20:29:39 +0000 (0:00:00.273) 0:00:23.461 *********** 2025-06-02 20:29:41.625871 | orchestrator | 2025-06-02 20:29:41.625885 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-06-02 20:29:41.625898 | orchestrator | Monday 02 June 2025 20:29:39 +0000 (0:00:00.079) 0:00:23.540 *********** 2025-06-02 20:29:41.625912 | orchestrator | 2025-06-02 20:29:41.625925 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-06-02 20:29:41.625939 | orchestrator | Monday 02 June 2025 20:29:39 +0000 (0:00:00.074) 0:00:23.615 *********** 2025-06-02 20:29:41.625953 | orchestrator | 2025-06-02 20:29:41.625966 | orchestrator | 
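The validator tasks above ("Get ceph osd tree", "Parse osd tree from JSON", "Get OSDs that are not up or in") amount to parsing the JSON tree and flagging bad OSD entries. A minimal sketch of that parsing step, with hypothetical sample data (not taken from this run); the `status`/`reweight` field handling is an assumption about `ceph osd tree --format json` output:

```python
import json

def osds_not_up_or_in(tree_json: str) -> list[str]:
    """Return names of OSD entries that are down or weighted out.

    `ceph osd tree --format json` reports a flat "nodes" list; OSD
    entries carry a "status" ("up"/"down") and a "reweight" (0 = out).
    """
    bad = []
    for node in json.loads(tree_json).get("nodes", []):
        if node.get("type") != "osd":
            continue  # skip root/host buckets
        if node.get("status") != "up" or float(node.get("reweight", 0)) == 0.0:
            bad.append(node["name"])
    return bad

# Hypothetical sample: osd.1 is down and weighted out.
sample = json.dumps({"nodes": [
    {"id": -1, "type": "root", "name": "default"},
    {"id": 0, "type": "osd", "name": "osd.0", "status": "up", "reweight": 1.0},
    {"id": 1, "type": "osd", "name": "osd.1", "status": "down", "reweight": 0.0},
]})
print(osds_not_up_or_in(sample))  # ['osd.1']
```

In this run the resulting list was empty, so "Fail test if OSDs are not up or in" was skipped and the pass task ran.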
RUNNING HANDLER [Write report file] ******************************************** 2025-06-02 20:29:41.625991 | orchestrator | Monday 02 June 2025 20:29:39 +0000 (0:00:00.076) 0:00:23.691 *********** 2025-06-02 20:29:41.626115 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-06-02 20:29:41.626136 | orchestrator | 2025-06-02 20:29:41.626151 | orchestrator | TASK [Print report file information] ******************************************* 2025-06-02 20:29:41.626165 | orchestrator | Monday 02 June 2025 20:29:41 +0000 (0:00:01.201) 0:00:24.893 *********** 2025-06-02 20:29:41.626179 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => { 2025-06-02 20:29:41.626193 | orchestrator |  "msg": [ 2025-06-02 20:29:41.626207 | orchestrator |  "Validator run completed.", 2025-06-02 20:29:41.626221 | orchestrator |  "You can find the report file here:", 2025-06-02 20:29:41.626235 | orchestrator |  "/opt/reports/validator/ceph-osds-validator-2025-06-02T20:29:17+00:00-report.json", 2025-06-02 20:29:41.626250 | orchestrator |  "on the following host:", 2025-06-02 20:29:41.626263 | orchestrator |  "testbed-manager" 2025-06-02 20:29:41.626277 | orchestrator |  ] 2025-06-02 20:29:41.626290 | orchestrator | } 2025-06-02 20:29:41.626304 | orchestrator | 2025-06-02 20:29:41.626317 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-02 20:29:41.626332 | orchestrator | testbed-node-3 : ok=35  changed=4  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2025-06-02 20:29:41.626422 | orchestrator | testbed-node-4 : ok=18  changed=1  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2025-06-02 20:29:41.626490 | orchestrator | testbed-node-5 : ok=18  changed=1  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2025-06-02 20:29:41.626504 | orchestrator | 2025-06-02 20:29:41.626516 | orchestrator | 2025-06-02 20:29:41.626529 | orchestrator | TASKS RECAP 
******************************************************************** 2025-06-02 20:29:41.626542 | orchestrator | Monday 02 June 2025 20:29:41 +0000 (0:00:00.552) 0:00:25.445 *********** 2025-06-02 20:29:41.626556 | orchestrator | =============================================================================== 2025-06-02 20:29:41.626567 | orchestrator | List ceph LVM volumes and collect data ---------------------------------- 2.61s 2025-06-02 20:29:41.626578 | orchestrator | Get ceph osd tree ------------------------------------------------------- 1.67s 2025-06-02 20:29:41.626597 | orchestrator | Aggregate test results step one ----------------------------------------- 1.61s 2025-06-02 20:29:41.626609 | orchestrator | Write report file ------------------------------------------------------- 1.20s 2025-06-02 20:29:41.626620 | orchestrator | Create report output directory ------------------------------------------ 0.91s 2025-06-02 20:29:41.626632 | orchestrator | Get CRUSH node data of each OSD host and root node childs --------------- 0.76s 2025-06-02 20:29:41.626644 | orchestrator | Set validation result to passed if no test failed ----------------------- 0.67s 2025-06-02 20:29:41.626655 | orchestrator | Aggregate test results step one ----------------------------------------- 0.64s 2025-06-02 20:29:41.626667 | orchestrator | Set _mon_hostname fact -------------------------------------------------- 0.61s 2025-06-02 20:29:41.626678 | orchestrator | Get timestamp for report file ------------------------------------------- 0.61s 2025-06-02 20:29:41.626689 | orchestrator | Print report file information ------------------------------------------- 0.55s 2025-06-02 20:29:41.626699 | orchestrator | Calculate total number of OSDs in cluster ------------------------------- 0.55s 2025-06-02 20:29:41.626711 | orchestrator | Get unencrypted and encrypted OSDs -------------------------------------- 0.53s 2025-06-02 20:29:41.626722 | orchestrator | Pass if count of encrypted OSDs 
equals count of OSDs -------------------- 0.51s 2025-06-02 20:29:41.626733 | orchestrator | Get list of ceph-osd containers on host --------------------------------- 0.49s 2025-06-02 20:29:41.626743 | orchestrator | Prepare test data ------------------------------------------------------- 0.48s 2025-06-02 20:29:41.626771 | orchestrator | Get count of ceph-osd containers that are not running ------------------- 0.48s 2025-06-02 20:29:41.893082 | orchestrator | Prepare test data ------------------------------------------------------- 0.48s 2025-06-02 20:29:41.893183 | orchestrator | Set test result to failed when count of containers is wrong ------------- 0.43s 2025-06-02 20:29:41.893194 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.40s 2025-06-02 20:29:42.147765 | orchestrator | + sh -c /opt/configuration/scripts/check/200-infrastructure.sh 2025-06-02 20:29:42.155722 | orchestrator | + set -e 2025-06-02 20:29:42.155821 | orchestrator | + source /opt/manager-vars.sh 2025-06-02 20:29:42.155838 | orchestrator | ++ export NUMBER_OF_NODES=6 2025-06-02 20:29:42.155850 | orchestrator | ++ NUMBER_OF_NODES=6 2025-06-02 20:29:42.155861 | orchestrator | ++ export CEPH_VERSION=reef 2025-06-02 20:29:42.155872 | orchestrator | ++ CEPH_VERSION=reef 2025-06-02 20:29:42.155883 | orchestrator | ++ export CONFIGURATION_VERSION=main 2025-06-02 20:29:42.155895 | orchestrator | ++ CONFIGURATION_VERSION=main 2025-06-02 20:29:42.155906 | orchestrator | ++ export MANAGER_VERSION=9.1.0 2025-06-02 20:29:42.155917 | orchestrator | ++ MANAGER_VERSION=9.1.0 2025-06-02 20:29:42.155928 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2025-06-02 20:29:42.155939 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2025-06-02 20:29:42.155950 | orchestrator | ++ export ARA=false 2025-06-02 20:29:42.155961 | orchestrator | ++ ARA=false 2025-06-02 20:29:42.155972 | orchestrator | ++ export DEPLOY_MODE=manager 2025-06-02 20:29:42.155983 | orchestrator | ++ 
DEPLOY_MODE=manager 2025-06-02 20:29:42.155993 | orchestrator | ++ export TEMPEST=false 2025-06-02 20:29:42.156004 | orchestrator | ++ TEMPEST=false 2025-06-02 20:29:42.156015 | orchestrator | ++ export IS_ZUUL=true 2025-06-02 20:29:42.156025 | orchestrator | ++ IS_ZUUL=true 2025-06-02 20:29:42.156036 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.191 2025-06-02 20:29:42.156047 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.191 2025-06-02 20:29:42.156058 | orchestrator | ++ export EXTERNAL_API=false 2025-06-02 20:29:42.156068 | orchestrator | ++ EXTERNAL_API=false 2025-06-02 20:29:42.156079 | orchestrator | ++ export IMAGE_USER=ubuntu 2025-06-02 20:29:42.156089 | orchestrator | ++ IMAGE_USER=ubuntu 2025-06-02 20:29:42.156100 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2025-06-02 20:29:42.156110 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2025-06-02 20:29:42.156121 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2025-06-02 20:29:42.156131 | orchestrator | ++ CEPH_STACK=ceph-ansible 2025-06-02 20:29:42.156143 | orchestrator | + [[ -e /etc/redhat-release ]] 2025-06-02 20:29:42.156153 | orchestrator | + source /etc/os-release 2025-06-02 20:29:42.156164 | orchestrator | ++ PRETTY_NAME='Ubuntu 24.04.2 LTS' 2025-06-02 20:29:42.156174 | orchestrator | ++ NAME=Ubuntu 2025-06-02 20:29:42.156185 | orchestrator | ++ VERSION_ID=24.04 2025-06-02 20:29:42.156195 | orchestrator | ++ VERSION='24.04.2 LTS (Noble Numbat)' 2025-06-02 20:29:42.156206 | orchestrator | ++ VERSION_CODENAME=noble 2025-06-02 20:29:42.156216 | orchestrator | ++ ID=ubuntu 2025-06-02 20:29:42.156227 | orchestrator | ++ ID_LIKE=debian 2025-06-02 20:29:42.156239 | orchestrator | ++ HOME_URL=https://www.ubuntu.com/ 2025-06-02 20:29:42.156257 | orchestrator | ++ SUPPORT_URL=https://help.ubuntu.com/ 2025-06-02 20:29:42.156285 | orchestrator | ++ BUG_REPORT_URL=https://bugs.launchpad.net/ubuntu/ 2025-06-02 20:29:42.156308 | orchestrator | ++ 
PRIVACY_POLICY_URL=https://www.ubuntu.com/legal/terms-and-policies/privacy-policy 2025-06-02 20:29:42.156327 | orchestrator | ++ UBUNTU_CODENAME=noble 2025-06-02 20:29:42.156345 | orchestrator | ++ LOGO=ubuntu-logo 2025-06-02 20:29:42.156363 | orchestrator | + [[ ubuntu == \u\b\u\n\t\u ]] 2025-06-02 20:29:42.156379 | orchestrator | + packages='libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client' 2025-06-02 20:29:42.156398 | orchestrator | + dpkg -s libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client 2025-06-02 20:29:42.192527 | orchestrator | + sudo apt-get install -y libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client 2025-06-02 20:30:04.248738 | orchestrator | 2025-06-02 20:30:04.248831 | orchestrator | # Status of Elasticsearch 2025-06-02 20:30:04.248842 | orchestrator | 2025-06-02 20:30:04.248850 | orchestrator | + pushd /opt/configuration/contrib 2025-06-02 20:30:04.248859 | orchestrator | + echo 2025-06-02 20:30:04.248866 | orchestrator | + echo '# Status of Elasticsearch' 2025-06-02 20:30:04.248873 | orchestrator | + echo 2025-06-02 20:30:04.248880 | orchestrator | + bash nagios-plugins/check_elasticsearch -H api-int.testbed.osism.xyz -s 2025-06-02 20:30:04.446165 | orchestrator | OK - elasticsearch (kolla_logging) is running. 
status: green; timed_out: false; number_of_nodes: 3; number_of_data_nodes: 3; active_primary_shards: 9; active_shards: 22; relocating_shards: 0; initializing_shards: 0; delayed_unassigned_shards: 0; unassigned_shards: 0 | 'active_primary'=9 'active'=22 'relocating'=0 'init'=0 'delay_unass'=0 'unass'=0 2025-06-02 20:30:04.446276 | orchestrator | 2025-06-02 20:30:04.446287 | orchestrator | + echo 2025-06-02 20:30:04.446571 | orchestrator | # Status of MariaDB 2025-06-02 20:30:04.446583 | orchestrator | 2025-06-02 20:30:04.446588 | orchestrator | + echo '# Status of MariaDB' 2025-06-02 20:30:04.446593 | orchestrator | + echo 2025-06-02 20:30:04.446597 | orchestrator | + MARIADB_USER=root_shard_0 2025-06-02 20:30:04.446602 | orchestrator | + bash nagios-plugins/check_galera_cluster -u root_shard_0 -p password -H api-int.testbed.osism.xyz -c 1 2025-06-02 20:30:04.523737 | orchestrator | Reading package lists... 2025-06-02 20:30:04.827390 | orchestrator | Building dependency tree... 2025-06-02 20:30:04.827817 | orchestrator | Reading state information... 2025-06-02 20:30:05.175152 | orchestrator | bc is already the newest version (1.07.1-3ubuntu4). 2025-06-02 20:30:05.175243 | orchestrator | bc set to manually installed. 2025-06-02 20:30:05.175250 | orchestrator | 0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded. 
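The `check_galera_cluster` probe launched above boils down to reading Galera status variables and comparing the cluster size against the critical threshold (`-c 1` here). A rough sketch of that logic (assumed, not the plugin source):

```python
def check_galera(wsrep_status: dict, crit_nodes: int = 1) -> str:
    """Sketch of a Galera health check: the node must belong to a Primary
    component and wsrep_cluster_size must exceed the critical threshold."""
    size = int(wsrep_status.get("wsrep_cluster_size", 0))
    if wsrep_status.get("wsrep_cluster_status") != "Primary":
        return "CRITICAL: node is not part of a Primary component"
    if size <= crit_nodes:
        return f"CRITICAL: number of NODES = {size}"
    return f"OK: number of NODES = {size} (wsrep_cluster_size)"

# Matches the healthy 3-node result seen in this run.
print(check_galera({"wsrep_cluster_status": "Primary",
                    "wsrep_cluster_size": "3"}))
```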
2025-06-02 20:30:05.834819 | orchestrator | OK: number of NODES = 3 (wsrep_cluster_size) 2025-06-02 20:30:05.835033 | orchestrator | 2025-06-02 20:30:05.835044 | orchestrator | + echo 2025-06-02 20:30:05.835050 | orchestrator | + echo '# Status of Prometheus' 2025-06-02 20:30:05.835056 | orchestrator | # Status of Prometheus 2025-06-02 20:30:05.835061 | orchestrator | 2025-06-02 20:30:05.835066 | orchestrator | + echo 2025-06-02 20:30:05.835072 | orchestrator | + curl -s https://api-int.testbed.osism.xyz:9091/-/healthy 2025-06-02 20:30:05.896908 | orchestrator | Unauthorized 2025-06-02 20:30:05.900048 | orchestrator | + curl -s https://api-int.testbed.osism.xyz:9091/-/ready 2025-06-02 20:30:05.958718 | orchestrator | Unauthorized 2025-06-02 20:30:05.961485 | orchestrator | 2025-06-02 20:30:05.961533 | orchestrator | # Status of RabbitMQ 2025-06-02 20:30:05.961547 | orchestrator | 2025-06-02 20:30:05.961558 | orchestrator | + echo 2025-06-02 20:30:05.961570 | orchestrator | + echo '# Status of RabbitMQ' 2025-06-02 20:30:05.961581 | orchestrator | + echo 2025-06-02 20:30:05.961593 | orchestrator | + perl nagios-plugins/check_rabbitmq_cluster --ssl 1 -H api-int.testbed.osism.xyz -u openstack -p password 2025-06-02 20:30:06.366851 | orchestrator | RABBITMQ_CLUSTER OK - nb_running_node OK (3) nb_running_disc_node OK (3) nb_running_ram_node OK (0) 2025-06-02 20:30:06.375561 | orchestrator | 2025-06-02 20:30:06.375635 | orchestrator | # Status of Redis 2025-06-02 20:30:06.375645 | orchestrator | 2025-06-02 20:30:06.375652 | orchestrator | + echo 2025-06-02 20:30:06.375661 | orchestrator | + echo '# Status of Redis' 2025-06-02 20:30:06.375669 | orchestrator | + echo 2025-06-02 20:30:06.375677 | orchestrator | + /usr/lib/nagios/plugins/check_tcp -H 192.168.16.10 -p 6379 -A -E -s 'AUTH QHNA1SZRlOKzLADhUd5ZDgpHfQe6dNfr3bwEdY24\r\nPING\r\nINFO replication\r\nQUIT\r\n' -e PONG -e role:master -e slave0:ip=192.168.16.1 -e,port=6379 -j 2025-06-02 20:30:06.383090 | orchestrator | 
TCP OK - 0.005 second response time on 192.168.16.10 port 6379|time=0.004798s;;;0.000000;10.000000 2025-06-02 20:30:06.383340 | orchestrator | + popd 2025-06-02 20:30:06.383619 | orchestrator | 2025-06-02 20:30:06.383635 | orchestrator | + echo 2025-06-02 20:30:06.383643 | orchestrator | # Create backup of MariaDB database 2025-06-02 20:30:06.383652 | orchestrator | 2025-06-02 20:30:06.383660 | orchestrator | + echo '# Create backup of MariaDB database' 2025-06-02 20:30:06.383667 | orchestrator | + echo 2025-06-02 20:30:06.383675 | orchestrator | + osism apply mariadb_backup -e mariadb_backup_type=full 2025-06-02 20:30:08.103855 | orchestrator | 2025-06-02 20:30:08 | INFO  | Task c3bfe0a7-a20c-4089-82fd-90ea8eb29b6f (mariadb_backup) was prepared for execution. 2025-06-02 20:30:08.103926 | orchestrator | 2025-06-02 20:30:08 | INFO  | It takes a moment until task c3bfe0a7-a20c-4089-82fd-90ea8eb29b6f (mariadb_backup) has been started and output is visible here. 2025-06-02 20:30:12.013645 | orchestrator | 2025-06-02 20:30:12.016696 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-06-02 20:30:12.017738 | orchestrator | 2025-06-02 20:30:12.017975 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-06-02 20:30:12.019118 | orchestrator | Monday 02 June 2025 20:30:12 +0000 (0:00:00.178) 0:00:00.178 *********** 2025-06-02 20:30:12.199673 | orchestrator | ok: [testbed-node-0] 2025-06-02 20:30:12.335159 | orchestrator | ok: [testbed-node-1] 2025-06-02 20:30:12.336489 | orchestrator | ok: [testbed-node-2] 2025-06-02 20:30:12.337322 | orchestrator | 2025-06-02 20:30:12.338698 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-06-02 20:30:12.339620 | orchestrator | Monday 02 June 2025 20:30:12 +0000 (0:00:00.326) 0:00:00.505 *********** 2025-06-02 20:30:12.899262 | orchestrator | ok: [testbed-node-0] => 
(item=enable_mariadb_True) 2025-06-02 20:30:12.899515 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True) 2025-06-02 20:30:12.899535 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True) 2025-06-02 20:30:12.899559 | orchestrator | 2025-06-02 20:30:12.899886 | orchestrator | PLAY [Apply role mariadb] ****************************************************** 2025-06-02 20:30:12.900326 | orchestrator | 2025-06-02 20:30:12.901641 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] *************************** 2025-06-02 20:30:12.901964 | orchestrator | Monday 02 June 2025 20:30:12 +0000 (0:00:00.559) 0:00:01.065 *********** 2025-06-02 20:30:13.291188 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-06-02 20:30:13.291922 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2025-06-02 20:30:13.293598 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2025-06-02 20:30:13.293649 | orchestrator | 2025-06-02 20:30:13.294723 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-06-02 20:30:13.295942 | orchestrator | Monday 02 June 2025 20:30:13 +0000 (0:00:00.394) 0:00:01.459 *********** 2025-06-02 20:30:13.816062 | orchestrator | included: /ansible/roles/mariadb/tasks/backup.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 20:30:13.817184 | orchestrator | 2025-06-02 20:30:13.817861 | orchestrator | TASK [mariadb : Get MariaDB container facts] *********************************** 2025-06-02 20:30:13.821702 | orchestrator | Monday 02 June 2025 20:30:13 +0000 (0:00:00.524) 0:00:01.984 *********** 2025-06-02 20:30:16.960952 | orchestrator | ok: [testbed-node-1] 2025-06-02 20:30:16.961066 | orchestrator | ok: [testbed-node-0] 2025-06-02 20:30:16.964730 | orchestrator | ok: [testbed-node-2] 2025-06-02 20:30:16.966121 | orchestrator | 2025-06-02 20:30:16.967818 | orchestrator | TASK [mariadb : Taking full database backup via 
Mariabackup] ******************* 2025-06-02 20:30:16.968368 | orchestrator | Monday 02 June 2025 20:30:16 +0000 (0:00:03.142) 0:00:05.126 *********** 2025-06-02 20:31:56.183628 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart 2025-06-02 20:31:56.183753 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_start 2025-06-02 20:31:56.183763 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2025-06-02 20:31:56.183776 | orchestrator | mariadb_bootstrap_restart 2025-06-02 20:31:56.255839 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:31:56.256742 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:31:56.262528 | orchestrator | changed: [testbed-node-0] 2025-06-02 20:31:56.262602 | orchestrator | 2025-06-02 20:31:56.262625 | orchestrator | PLAY [Restart mariadb services] ************************************************ 2025-06-02 20:31:56.262644 | orchestrator | skipping: no hosts matched 2025-06-02 20:31:56.262674 | orchestrator | 2025-06-02 20:31:56.263051 | orchestrator | PLAY [Start mariadb services] ************************************************** 2025-06-02 20:31:56.264288 | orchestrator | skipping: no hosts matched 2025-06-02 20:31:56.265127 | orchestrator | 2025-06-02 20:31:56.265812 | orchestrator | PLAY [Restart bootstrap mariadb service] *************************************** 2025-06-02 20:31:56.266495 | orchestrator | skipping: no hosts matched 2025-06-02 20:31:56.267401 | orchestrator | 2025-06-02 20:31:56.268286 | orchestrator | PLAY [Apply mariadb post-configuration] **************************************** 2025-06-02 20:31:56.268798 | orchestrator | 2025-06-02 20:31:56.269174 | orchestrator | TASK [Include mariadb post-deploy.yml] ***************************************** 2025-06-02 20:31:56.269738 | orchestrator | Monday 02 June 2025 20:31:56 +0000 (0:01:39.299) 0:01:44.426 *********** 2025-06-02 20:31:56.430197 | orchestrator | 
skipping: [testbed-node-0] 2025-06-02 20:31:56.541551 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:31:56.542745 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:31:56.549437 | orchestrator | 2025-06-02 20:31:56.549515 | orchestrator | TASK [Include mariadb post-upgrade.yml] **************************************** 2025-06-02 20:31:56.549530 | orchestrator | Monday 02 June 2025 20:31:56 +0000 (0:00:00.285) 0:01:44.712 *********** 2025-06-02 20:31:56.872734 | orchestrator | skipping: [testbed-node-0] 2025-06-02 20:31:56.910426 | orchestrator | skipping: [testbed-node-1] 2025-06-02 20:31:56.910941 | orchestrator | skipping: [testbed-node-2] 2025-06-02 20:31:56.912232 | orchestrator | 2025-06-02 20:31:56.913308 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-02 20:31:56.913854 | orchestrator | 2025-06-02 20:31:56 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-06-02 20:31:56.914155 | orchestrator | 2025-06-02 20:31:56 | INFO  | Please wait and do not abort execution. 
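The Redis check earlier in this script (`check_tcp ... -s 'AUTH ...\r\nPING\r\nINFO replication\r\n' -e PONG -e role:master`) is a plain send/expect TCP probe: it writes a scripted payload and passes only if every `-e` pattern occurs in the response. A minimal sketch of the matching side, with a canned response (the reply text and the slave IP are illustrative, not from this run):

```python
def probe_ok(response: str, expected: list[str]) -> bool:
    """check_tcp-style expect matching: every expected pattern must
    appear somewhere in the bytes read back from the socket."""
    return all(pattern in response for pattern in expected)

# Hypothetical response resembling Redis replies to AUTH, PING and INFO.
canned = ("+OK\r\n+PONG\r\n# Replication\r\nrole:master\r\n"
          "slave0:ip=192.168.16.12,port=6379\r\n")
print(probe_ok(canned, ["PONG", "role:master"]))  # True
print(probe_ok(canned, ["role:slave"]))           # False
```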
2025-06-02 20:31:56.914982 | orchestrator | testbed-node-0 : ok=6  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-02 20:31:56.915695 | orchestrator | testbed-node-1 : ok=4  changed=0 unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-06-02 20:31:56.916152 | orchestrator | testbed-node-2 : ok=4  changed=0 unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-06-02 20:31:56.916579 | orchestrator | 2025-06-02 20:31:56.917057 | orchestrator | 2025-06-02 20:31:56.917664 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-02 20:31:56.918528 | orchestrator | Monday 02 June 2025 20:31:56 +0000 (0:00:00.369) 0:01:45.082 *********** 2025-06-02 20:31:56.918597 | orchestrator | =============================================================================== 2025-06-02 20:31:56.918991 | orchestrator | mariadb : Taking full database backup via Mariabackup ------------------ 99.30s 2025-06-02 20:31:56.919437 | orchestrator | mariadb : Get MariaDB container facts ----------------------------------- 3.14s 2025-06-02 20:31:56.919665 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.56s 2025-06-02 20:31:56.920112 | orchestrator | mariadb : include_tasks ------------------------------------------------- 0.52s 2025-06-02 20:31:56.920585 | orchestrator | mariadb : Group MariaDB hosts based on shards --------------------------- 0.39s 2025-06-02 20:31:56.920993 | orchestrator | Include mariadb post-upgrade.yml ---------------------------------------- 0.37s 2025-06-02 20:31:56.921454 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.33s 2025-06-02 20:31:56.921831 | orchestrator | Include mariadb post-deploy.yml ----------------------------------------- 0.29s 2025-06-02 20:31:57.423165 | orchestrator | + sh -c /opt/configuration/scripts/check/300-openstack.sh 2025-06-02 20:31:57.432207 | orchestrator | + set -e 
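The `300-openstack.sh` checks that follow begin by listing endpoints. One way to turn such a listing into an assertion is to require both a public and an internal interface per service; a hedged sketch over `openstack endpoint list -f json`-shaped rows (the field names "Service Name"/"Interface" follow the CLI's JSON output, and the sample data is hypothetical):

```python
import json

def missing_interfaces(endpoints_json: str,
                       required=("public", "internal")) -> dict:
    """Group endpoint rows by service and report services lacking any
    required interface. Returns an empty dict when all services are OK."""
    seen: dict[str, set] = {}
    for row in json.loads(endpoints_json):
        seen.setdefault(row["Service Name"], set()).add(row["Interface"])
    return {svc: sorted(set(required) - ifaces)
            for svc, ifaces in seen.items()
            if not set(required) <= ifaces}

# Hypothetical sample: glance is missing its internal endpoint.
sample = json.dumps([
    {"Service Name": "keystone", "Interface": "public"},
    {"Service Name": "keystone", "Interface": "internal"},
    {"Service Name": "glance", "Interface": "public"},
])
print(missing_interfaces(sample))  # {'glance': ['internal']}
```

For the endpoint table printed in this run, every listed service exposes both interfaces, so a check like this would return an empty dict.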
2025-06-02 20:31:57.432273 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-06-02 20:31:57.432280 | orchestrator | ++ export INTERACTIVE=false 2025-06-02 20:31:57.432287 | orchestrator | ++ INTERACTIVE=false 2025-06-02 20:31:57.432292 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-06-02 20:31:57.432297 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-06-02 20:31:57.432307 | orchestrator | + source /opt/configuration/scripts/manager-version.sh 2025-06-02 20:31:57.433204 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml 2025-06-02 20:31:57.436987 | orchestrator | 2025-06-02 20:31:57.437029 | orchestrator | # OpenStack endpoints 2025-06-02 20:31:57.437038 | orchestrator | 2025-06-02 20:31:57.437045 | orchestrator | ++ export MANAGER_VERSION=9.1.0 2025-06-02 20:31:57.437052 | orchestrator | ++ MANAGER_VERSION=9.1.0 2025-06-02 20:31:57.437060 | orchestrator | + export OS_CLOUD=admin 2025-06-02 20:31:57.437067 | orchestrator | + OS_CLOUD=admin 2025-06-02 20:31:57.437074 | orchestrator | + echo 2025-06-02 20:31:57.437082 | orchestrator | + echo '# OpenStack endpoints' 2025-06-02 20:31:57.437089 | orchestrator | + echo 2025-06-02 20:31:57.437096 | orchestrator | + openstack endpoint list 2025-06-02 20:32:00.744995 | orchestrator | +----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+ 2025-06-02 20:32:00.745089 | orchestrator | | ID | Region | Service Name | Service Type | Enabled | Interface | URL | 2025-06-02 20:32:00.745117 | orchestrator | +----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+ 2025-06-02 20:32:00.745125 | orchestrator | | 16c70de335704926b8693adb59713a51 | RegionOne | nova | compute | True | public | 
https://api.testbed.osism.xyz:8774/v2.1 |
2025-06-02 20:32:00.745132 | orchestrator | | 2824410740fc4c58909919512a2a9e0d | RegionOne | glance | image | True | internal | https://api-int.testbed.osism.xyz:9292 |
2025-06-02 20:32:00.745140 | orchestrator | | 2ac9c32afc374b8ea117838820aaf850 | RegionOne | barbican | key-manager | True | public | https://api.testbed.osism.xyz:9311 |
2025-06-02 20:32:00.745147 | orchestrator | | 3abf1956078145b6b845e618da34caff | RegionOne | barbican | key-manager | True | internal | https://api-int.testbed.osism.xyz:9311 |
2025-06-02 20:32:00.745155 | orchestrator | | 412e3324669b4ea6a01d3cd62e08d25e | RegionOne | placement | placement | True | public | https://api.testbed.osism.xyz:8780 |
2025-06-02 20:32:00.745162 | orchestrator | | 4627cb3ea54a4040804348a9b0441237 | RegionOne | designate | dns | True | public | https://api.testbed.osism.xyz:9001 |
2025-06-02 20:32:00.745169 | orchestrator | | 503c41dc6e024afb8221a69b19eabaaa | RegionOne | octavia | load-balancer | True | public | https://api.testbed.osism.xyz:9876 |
2025-06-02 20:32:00.745177 | orchestrator | | 589efaa4238146329f75fbf14dd98043 | RegionOne | neutron | network | True | internal | https://api-int.testbed.osism.xyz:9696 |
2025-06-02 20:32:00.745184 | orchestrator | | 5efb25a12bef44edae00bcf217b430dd | RegionOne | cinderv3 | volumev3 | True | internal | https://api-int.testbed.osism.xyz:8776/v3/%(tenant_id)s |
2025-06-02 20:32:00.745191 | orchestrator | | 66be088564664f75a8d0959e4fc119e4 | RegionOne | swift | object-store | True | public | https://api.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s |
2025-06-02 20:32:00.745199 | orchestrator | | 7429608d87fa4a709aa340b1960acae4 | RegionOne | designate | dns | True | internal | https://api-int.testbed.osism.xyz:9001 |
2025-06-02 20:32:00.745206 | orchestrator | | 760dbcbc7a13401f8f6334085ffd5f59 | RegionOne | nova | compute | True | internal | https://api-int.testbed.osism.xyz:8774/v2.1 |
2025-06-02 20:32:00.745213 | orchestrator | | 93eb80dfe5164e56a2f24fc3f2ca1530 | RegionOne | magnum | container-infra | True | internal | https://api-int.testbed.osism.xyz:9511/v1 |
2025-06-02 20:32:00.745221 | orchestrator | | 958ca46b5ae84df983d9ab7c8a08dd63 | RegionOne | octavia | load-balancer | True | internal | https://api-int.testbed.osism.xyz:9876 |
2025-06-02 20:32:00.745228 | orchestrator | | 9b0a35d8f7c14257ba690d502e55c0dd | RegionOne | neutron | network | True | public | https://api.testbed.osism.xyz:9696 |
2025-06-02 20:32:00.745236 | orchestrator | | a3d01f2572824548bb15bee9dde46325 | RegionOne | cinderv3 | volumev3 | True | public | https://api.testbed.osism.xyz:8776/v3/%(tenant_id)s |
2025-06-02 20:32:00.745243 | orchestrator | | a46225f51d26485ab04ca6d9aca10652 | RegionOne | placement | placement | True | internal | https://api-int.testbed.osism.xyz:8780 |
2025-06-02 20:32:00.745250 | orchestrator | | bf4a7f9383c0403ea2901b98ae4e0c5b | RegionOne | glance | image | True | public | https://api.testbed.osism.xyz:9292 |
2025-06-02 20:32:00.745257 | orchestrator | | c49365faec9943f8a7896d4e0c079e9a | RegionOne | keystone | identity | True | public | https://api.testbed.osism.xyz:5000 |
2025-06-02 20:32:00.745283 | orchestrator | | c744fac9bc514282bfc352d57fae3a13 | RegionOne | magnum | container-infra | True | public | https://api.testbed.osism.xyz:9511/v1 |
2025-06-02 20:32:00.745305 | orchestrator | | cf9a28bab9dd4d80b3d5632ed21bf1fd | RegionOne | swift | object-store | True | internal | https://api-int.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s |
2025-06-02 20:32:00.745313 | orchestrator | | e575d9e97cb9482db7ff84f0c7fab739 | RegionOne | keystone | identity | True | internal | https://api-int.testbed.osism.xyz:5000 |
2025-06-02 20:32:00.745320 | orchestrator | +----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+
2025-06-02 20:32:00.972054 |
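Not part of the job log: a minimal sketch of how the endpoint catalog above could be sanity-checked offline. It assumes rows shaped like the output of `openstack endpoint list -f json` (the column names `Service Name` and `Interface` mirror the table headers; treat the exact field names as an assumption) and flags services that do not expose both a public and an internal endpoint.

```python
# Hypothetical helper, not part of the testbed job: flag catalog services
# that are missing a public or internal endpoint.

def missing_interfaces(rows, required=("public", "internal")):
    """Return {service: set of missing interfaces} for incomplete services."""
    seen = {}
    for row in rows:
        # Group the interfaces observed per service name.
        seen.setdefault(row["Service Name"], set()).add(row["Interface"])
    return {svc: set(required) - ifaces
            for svc, ifaces in seen.items()
            if not set(required) <= ifaces}

# Sample rows mimicking a fragment of the catalog above.
rows = [
    {"Service Name": "keystone", "Interface": "public"},
    {"Service Name": "keystone", "Interface": "internal"},
    {"Service Name": "glance", "Interface": "public"},
]
print(missing_interfaces(rows))  # glance lacks an internal endpoint
```

In the catalog printed above every service has both interfaces, so a check like this would return an empty dict for the real data.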
orchestrator |
2025-06-02 20:32:00.972151 | orchestrator | # Cinder
2025-06-02 20:32:00.972166 | orchestrator |
2025-06-02 20:32:00.972176 | orchestrator | + echo
2025-06-02 20:32:00.972187 | orchestrator | + echo '# Cinder'
2025-06-02 20:32:00.972198 | orchestrator | + echo
2025-06-02 20:32:00.972209 | orchestrator | + openstack volume service list
2025-06-02 20:32:03.595164 | orchestrator | +------------------+----------------------------+----------+---------+-------+----------------------------+
2025-06-02 20:32:03.595296 | orchestrator | | Binary | Host | Zone | Status | State | Updated At |
2025-06-02 20:32:03.595348 | orchestrator | +------------------+----------------------------+----------+---------+-------+----------------------------+
2025-06-02 20:32:03.595371 | orchestrator | | cinder-scheduler | testbed-node-0 | internal | enabled | up | 2025-06-02T20:32:01.000000 |
2025-06-02 20:32:03.595389 | orchestrator | | cinder-scheduler | testbed-node-1 | internal | enabled | up | 2025-06-02T20:32:00.000000 |
2025-06-02 20:32:03.595408 | orchestrator | | cinder-scheduler | testbed-node-2 | internal | enabled | up | 2025-06-02T20:32:00.000000 |
2025-06-02 20:32:03.595426 | orchestrator | | cinder-volume | testbed-node-3@rbd-volumes | nova | enabled | up | 2025-06-02T20:32:00.000000 |
2025-06-02 20:32:03.595445 | orchestrator | | cinder-volume | testbed-node-5@rbd-volumes | nova | enabled | up | 2025-06-02T20:32:01.000000 |
2025-06-02 20:32:03.595488 | orchestrator | | cinder-volume | testbed-node-4@rbd-volumes | nova | enabled | up | 2025-06-02T20:32:01.000000 |
2025-06-02 20:32:03.595507 | orchestrator | | cinder-backup | testbed-node-3 | nova | enabled | up | 2025-06-02T20:32:00.000000 |
2025-06-02 20:32:03.595526 | orchestrator | | cinder-backup | testbed-node-4 | nova | enabled | up | 2025-06-02T20:32:00.000000 |
2025-06-02 20:32:03.595544 | orchestrator | | cinder-backup | testbed-node-5 | nova | enabled | up | 2025-06-02T20:32:00.000000 |
2025-06-02 20:32:03.595563 | orchestrator | +------------------+----------------------------+----------+---------+-------+----------------------------+
2025-06-02 20:32:03.833101 | orchestrator |
2025-06-02 20:32:03.833203 | orchestrator | # Neutron
2025-06-02 20:32:03.833218 | orchestrator |
2025-06-02 20:32:03.833230 | orchestrator | + echo
2025-06-02 20:32:03.833241 | orchestrator | + echo '# Neutron'
2025-06-02 20:32:03.833253 | orchestrator | + echo
2025-06-02 20:32:03.833264 | orchestrator | + openstack network agent list
2025-06-02 20:32:07.049762 | orchestrator | +--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+
2025-06-02 20:32:07.049858 | orchestrator | | ID | Agent Type | Host | Availability Zone | Alive | State | Binary |
2025-06-02 20:32:07.049869 | orchestrator | +--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+
2025-06-02 20:32:07.049877 | orchestrator | | testbed-node-0 | OVN Controller Gateway agent | testbed-node-0 | nova | :-) | UP | ovn-controller |
2025-06-02 20:32:07.049885 | orchestrator | | testbed-node-1 | OVN Controller Gateway agent | testbed-node-1 | nova | :-) | UP | ovn-controller |
2025-06-02 20:32:07.049911 | orchestrator | | testbed-node-4 | OVN Controller agent | testbed-node-4 | | :-) | UP | ovn-controller |
2025-06-02 20:32:07.049919 | orchestrator | | testbed-node-5 | OVN Controller agent | testbed-node-5 | | :-) | UP | ovn-controller |
2025-06-02 20:32:07.049926 | orchestrator | | testbed-node-2 | OVN Controller Gateway agent | testbed-node-2 | nova | :-) | UP | ovn-controller |
2025-06-02 20:32:07.049933 | orchestrator | | testbed-node-3 | OVN Controller agent | testbed-node-3 | | :-) | UP | ovn-controller |
2025-06-02 20:32:07.049940 | orchestrator | | 36b9d21c-9928-5c0a-9b27-73ac7a3e770c | OVN Metadata agent | testbed-node-5 | | :-) | UP | neutron-ovn-metadata-agent |
2025-06-02 20:32:07.049947 | orchestrator | | e645415a-98f5-5758-8cd1-c47af282b5c0 | OVN Metadata agent | testbed-node-3 | | :-) | UP | neutron-ovn-metadata-agent |
2025-06-02 20:32:07.049955 | orchestrator | | 4939696e-6092-5a33-bb73-b850064684df | OVN Metadata agent | testbed-node-4 | | :-) | UP | neutron-ovn-metadata-agent |
2025-06-02 20:32:07.049962 | orchestrator | +--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+
2025-06-02 20:32:07.337099 | orchestrator | + openstack network service provider list
2025-06-02 20:32:10.111738 | orchestrator | +---------------+------+---------+
2025-06-02 20:32:10.111837 | orchestrator | | Service Type | Name | Default |
2025-06-02 20:32:10.111849 | orchestrator | +---------------+------+---------+
2025-06-02 20:32:10.111859 | orchestrator | | L3_ROUTER_NAT | ovn | True |
2025-06-02 20:32:10.111868 | orchestrator | +---------------+------+---------+
2025-06-02 20:32:10.412800 | orchestrator |
2025-06-02 20:32:10.412871 | orchestrator | # Nova
2025-06-02 20:32:10.412878 | orchestrator |
2025-06-02 20:32:10.412884 | orchestrator | + echo
2025-06-02 20:32:10.412891 | orchestrator | + echo '# Nova'
2025-06-02 20:32:10.412898 | orchestrator | + echo
2025-06-02 20:32:10.412905 | orchestrator | + openstack compute service list
2025-06-02 20:32:13.014981 | orchestrator | +--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+
2025-06-02 20:32:13.015115 | orchestrator | | ID | Binary | Host | Zone | Status | State | Updated At |
2025-06-02 20:32:13.015136 | orchestrator | +--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+
2025-06-02 20:32:13.015151 | orchestrator | | bc28124c-cdb7-4d64-a8ff-9a40ccb37b1c | nova-scheduler | testbed-node-0 | internal | enabled | up | 2025-06-02T20:32:03.000000 |
2025-06-02 20:32:13.015167 | orchestrator | | 5d98be74-fbb3-4617-a75e-f0ad41d5739c | nova-scheduler | testbed-node-2 | internal | enabled | up | 2025-06-02T20:32:10.000000 |
2025-06-02 20:32:13.015184 | orchestrator | | 1c9e544d-549b-4473-9340-c24ecabb52af | nova-scheduler | testbed-node-1 | internal | enabled | up | 2025-06-02T20:32:10.000000 |
2025-06-02 20:32:13.015200 | orchestrator | | 72658c74-8596-4c2b-b682-ee80fe930907 | nova-conductor | testbed-node-0 | internal | enabled | up | 2025-06-02T20:32:06.000000 |
2025-06-02 20:32:13.015216 | orchestrator | | b245d4bd-b491-4548-9ae2-44225e74bf52 | nova-conductor | testbed-node-2 | internal | enabled | up | 2025-06-02T20:32:07.000000 |
2025-06-02 20:32:13.015232 | orchestrator | | e2226fe4-6944-4d2c-a6a6-21091be0552c | nova-conductor | testbed-node-1 | internal | enabled | up | 2025-06-02T20:32:07.000000 |
2025-06-02 20:32:13.015273 | orchestrator | | f15c3e21-b31f-4b82-aadc-d2829b72687a | nova-compute | testbed-node-4 | nova | enabled | up | 2025-06-02T20:32:06.000000 |
2025-06-02 20:32:13.015286 | orchestrator | | 233b9d34-431a-46d2-91a7-d8604ad13940 | nova-compute | testbed-node-5 | nova | enabled | up | 2025-06-02T20:32:07.000000 |
2025-06-02 20:32:13.015295 | orchestrator | | 09689d5e-c851-434f-bc01-5ff2d023a29c | nova-compute | testbed-node-3 | nova | enabled | up | 2025-06-02T20:32:07.000000 |
2025-06-02 20:32:13.015440 | orchestrator | +--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+
2025-06-02 20:32:13.267417 | orchestrator | + openstack hypervisor list
2025-06-02 20:32:17.617254 | orchestrator | +--------------------------------------+---------------------+-----------------+---------------+-------+
2025-06-02 20:32:17.617450 | orchestrator | | ID | Hypervisor Hostname | Hypervisor Type | Host IP | State |
2025-06-02 20:32:17.617470 | orchestrator | +--------------------------------------+---------------------+-----------------+---------------+-------+
2025-06-02 20:32:17.617481 | orchestrator | | e4196560-8e56-4fd9-a029-da597fb18ee8 | testbed-node-4 | QEMU | 192.168.16.14 | up |
2025-06-02 20:32:17.617493 | orchestrator | | 40943213-da30-41dd-be58-e0b3898fe640 | testbed-node-5 | QEMU | 192.168.16.15 | up |
2025-06-02 20:32:17.617504 | orchestrator | | cf4520cb-38dd-465d-acce-dbaa81fdbe98 | testbed-node-3 | QEMU | 192.168.16.13 | up |
2025-06-02 20:32:17.617515 | orchestrator | +--------------------------------------+---------------------+-----------------+---------------+-------+
2025-06-02 20:32:17.873965 | orchestrator |
2025-06-02 20:32:17.874087 | orchestrator | # Run OpenStack test play
2025-06-02 20:32:17.874099 | orchestrator |
2025-06-02 20:32:17.874107 | orchestrator | + echo
2025-06-02 20:32:17.874116 | orchestrator | + echo '# Run OpenStack test play'
2025-06-02 20:32:17.874124 | orchestrator | + echo
2025-06-02 20:32:17.874132 | orchestrator | + osism apply --environment openstack test
2025-06-02 20:32:19.534420 | orchestrator | 2025-06-02 20:32:19 | INFO  | Trying to run play test in environment openstack
2025-06-02 20:32:19.538872 | orchestrator | Registering Redlock._acquired_script
2025-06-02 20:32:19.539095 | orchestrator | Registering Redlock._extend_script
2025-06-02 20:32:19.539149 | orchestrator | Registering Redlock._release_script
2025-06-02 20:32:19.614800 | orchestrator | 2025-06-02 20:32:19 | INFO  | Task 5571eee6-de0a-4c37-83e7-fc179ad14e37 (test) was prepared for execution.
2025-06-02 20:32:19.614937 | orchestrator | 2025-06-02 20:32:19 | INFO  | It takes a moment until task 5571eee6-de0a-4c37-83e7-fc179ad14e37 (test) has been started and output is visible here.
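Not part of the job log: the Cinder, Neutron, and Nova listings above are eyeballed for `enabled`/`up` rows; a minimal sketch of automating that check is below. It assumes rows shaped like `openstack volume service list -f json` or `openstack compute service list -f json` output (column names `Binary`, `Host`, `Status`, `State` mirror the table headers; the exact JSON field names are an assumption).

```python
# Hypothetical helper, not part of the testbed job: report services that
# are disabled or down in an `openstack ... service list` result.

def down_services(rows):
    """Return (binary, host) pairs whose service is not enabled and up."""
    return [(r["Binary"], r["Host"])
            for r in rows
            if r["Status"] != "enabled" or r["State"] != "up"]

# Sample rows mimicking the cinder table above, with one row forced down.
rows = [
    {"Binary": "cinder-scheduler", "Host": "testbed-node-0",
     "Status": "enabled", "State": "up"},
    {"Binary": "cinder-volume", "Host": "testbed-node-3@rbd-volumes",
     "Status": "enabled", "State": "down"},
]
print(down_services(rows))  # only the down row is reported
```

For the tables in the log above the result would be empty: every scheduler, volume, backup, conductor, and compute service reports `enabled`/`up`.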
2025-06-02 20:32:23.511103 | orchestrator |
2025-06-02 20:32:23.511218 | orchestrator | PLAY [Create test project] *****************************************************
2025-06-02 20:32:23.512436 | orchestrator |
2025-06-02 20:32:23.512459 | orchestrator | TASK [Create test domain] ******************************************************
2025-06-02 20:32:23.513342 | orchestrator | Monday 02 June 2025 20:32:23 +0000 (0:00:00.074) 0:00:00.074 ***********
2025-06-02 20:32:27.085797 | orchestrator | changed: [localhost]
2025-06-02 20:32:27.086519 | orchestrator |
2025-06-02 20:32:27.088410 | orchestrator | TASK [Create test-admin user] **************************************************
2025-06-02 20:32:27.089842 | orchestrator | Monday 02 June 2025 20:32:27 +0000 (0:00:03.576) 0:00:03.651 ***********
2025-06-02 20:32:31.154621 | orchestrator | changed: [localhost]
2025-06-02 20:32:31.154784 | orchestrator |
2025-06-02 20:32:31.155009 | orchestrator | TASK [Add manager role to user test-admin] *************************************
2025-06-02 20:32:31.157182 | orchestrator | Monday 02 June 2025 20:32:31 +0000 (0:00:04.067) 0:00:07.718 ***********
2025-06-02 20:32:37.206410 | orchestrator | changed: [localhost]
2025-06-02 20:32:37.206588 | orchestrator |
2025-06-02 20:32:37.207010 | orchestrator | TASK [Create test project] *****************************************************
2025-06-02 20:32:37.207877 | orchestrator | Monday 02 June 2025 20:32:37 +0000 (0:00:06.053) 0:00:13.772 ***********
2025-06-02 20:32:41.181723 | orchestrator | changed: [localhost]
2025-06-02 20:32:41.181828 | orchestrator |
2025-06-02 20:32:41.181842 | orchestrator | TASK [Create test user] ********************************************************
2025-06-02 20:32:41.182167 | orchestrator | Monday 02 June 2025 20:32:41 +0000 (0:00:03.973) 0:00:17.746 ***********
2025-06-02 20:32:45.262587 | orchestrator | changed: [localhost]
2025-06-02 20:32:45.263188 | orchestrator |
2025-06-02 20:32:45.264245 | orchestrator | TASK [Add member roles to user test] *******************************************
2025-06-02 20:32:45.264788 | orchestrator | Monday 02 June 2025 20:32:45 +0000 (0:00:04.081) 0:00:21.828 ***********
2025-06-02 20:32:57.296435 | orchestrator | changed: [localhost] => (item=load-balancer_member)
2025-06-02 20:32:57.296583 | orchestrator | changed: [localhost] => (item=member)
2025-06-02 20:32:57.296613 | orchestrator | changed: [localhost] => (item=creator)
2025-06-02 20:32:57.296633 | orchestrator |
2025-06-02 20:32:57.296656 | orchestrator | TASK [Create test server group] ************************************************
2025-06-02 20:32:57.296677 | orchestrator | Monday 02 June 2025 20:32:57 +0000 (0:00:12.027) 0:00:33.855 ***********
2025-06-02 20:33:02.267414 | orchestrator | changed: [localhost]
2025-06-02 20:33:02.267505 | orchestrator |
2025-06-02 20:33:02.267951 | orchestrator | TASK [Create ssh security group] ***********************************************
2025-06-02 20:33:02.268591 | orchestrator | Monday 02 June 2025 20:33:02 +0000 (0:00:04.975) 0:00:38.831 ***********
2025-06-02 20:33:07.685866 | orchestrator | changed: [localhost]
2025-06-02 20:33:07.689151 | orchestrator |
2025-06-02 20:33:07.689198 | orchestrator | TASK [Add rule to ssh security group] ******************************************
2025-06-02 20:33:07.691230 | orchestrator | Monday 02 June 2025 20:33:07 +0000 (0:00:05.419) 0:00:44.250 ***********
2025-06-02 20:33:11.925000 | orchestrator | changed: [localhost]
2025-06-02 20:33:11.926411 | orchestrator |
2025-06-02 20:33:11.926810 | orchestrator | TASK [Create icmp security group] **********************************************
2025-06-02 20:33:11.927682 | orchestrator | Monday 02 June 2025 20:33:11 +0000 (0:00:04.238) 0:00:48.489 ***********
2025-06-02 20:33:15.695138 | orchestrator | changed: [localhost]
2025-06-02 20:33:15.695248 | orchestrator |
2025-06-02 20:33:15.696488 | orchestrator | TASK [Add rule to icmp security group] *****************************************
2025-06-02 20:33:15.697700 | orchestrator | Monday 02 June 2025 20:33:15 +0000 (0:00:03.770) 0:00:52.259 ***********
2025-06-02 20:33:19.641085 | orchestrator | changed: [localhost]
2025-06-02 20:33:19.641336 | orchestrator |
2025-06-02 20:33:19.642109 | orchestrator | TASK [Create test keypair] *****************************************************
2025-06-02 20:33:19.642679 | orchestrator | Monday 02 June 2025 20:33:19 +0000 (0:00:03.947) 0:00:56.207 ***********
2025-06-02 20:33:23.932378 | orchestrator | changed: [localhost]
2025-06-02 20:33:23.932485 | orchestrator |
2025-06-02 20:33:23.933258 | orchestrator | TASK [Create test network topology] ********************************************
2025-06-02 20:33:23.934830 | orchestrator | Monday 02 June 2025 20:33:23 +0000 (0:00:04.289) 0:01:00.496 ***********
2025-06-02 20:33:40.490305 | orchestrator | changed: [localhost]
2025-06-02 20:33:40.490410 | orchestrator |
2025-06-02 20:33:40.492387 | orchestrator | TASK [Create test instances] ***************************************************
2025-06-02 20:33:40.494647 | orchestrator | Monday 02 June 2025 20:33:40 +0000 (0:00:16.552) 0:01:17.049 ***********
2025-06-02 20:35:53.391256 | orchestrator | changed: [localhost] => (item=test)
2025-06-02 20:35:53.391364 | orchestrator | changed: [localhost] => (item=test-1)
2025-06-02 20:35:53.391373 | orchestrator | changed: [localhost] => (item=test-2)
2025-06-02 20:35:53.391380 | orchestrator |
2025-06-02 20:35:53.391388 | orchestrator | STILL ALIVE [task 'Create test instances' is running] **************************
2025-06-02 20:36:23.389667 | orchestrator |
2025-06-02 20:36:23.389774 | orchestrator | STILL ALIVE [task 'Create test instances' is running] **************************
2025-06-02 20:36:53.388463 | orchestrator | changed: [localhost] => (item=test-3)
2025-06-02 20:36:53.388571 | orchestrator |
2025-06-02 20:36:53.388581 | orchestrator | STILL ALIVE [task 'Create test instances' is running] **************************
2025-06-02 20:37:03.085590 | orchestrator | changed: [localhost] => (item=test-4)
2025-06-02 20:37:03.085692 | orchestrator |
2025-06-02 20:37:03.086461 | orchestrator | TASK [Add metadata to instances] ***********************************************
2025-06-02 20:37:03.086499 | orchestrator | Monday 02 June 2025 20:37:03 +0000 (0:03:22.603) 0:04:39.652 ***********
2025-06-02 20:37:26.149367 | orchestrator | changed: [localhost] => (item=test)
2025-06-02 20:37:26.149467 | orchestrator | changed: [localhost] => (item=test-1)
2025-06-02 20:37:26.149478 | orchestrator | changed: [localhost] => (item=test-2)
2025-06-02 20:37:26.149485 | orchestrator | changed: [localhost] => (item=test-3)
2025-06-02 20:37:26.149514 | orchestrator | changed: [localhost] => (item=test-4)
2025-06-02 20:37:26.149521 | orchestrator |
2025-06-02 20:37:26.149529 | orchestrator | TASK [Add tag to instances] ****************************************************
2025-06-02 20:37:26.149537 | orchestrator | Monday 02 June 2025 20:37:26 +0000 (0:00:23.054) 0:05:02.707 ***********
2025-06-02 20:37:58.096947 | orchestrator | changed: [localhost] => (item=test)
2025-06-02 20:37:58.097035 | orchestrator | changed: [localhost] => (item=test-1)
2025-06-02 20:37:58.097042 | orchestrator | changed: [localhost] => (item=test-2)
2025-06-02 20:37:58.097048 | orchestrator | changed: [localhost] => (item=test-3)
2025-06-02 20:37:58.097053 | orchestrator | changed: [localhost] => (item=test-4)
2025-06-02 20:37:58.097058 | orchestrator |
2025-06-02 20:37:58.097064 | orchestrator | TASK [Create test volume] ******************************************************
2025-06-02 20:37:58.097072 | orchestrator | Monday 02 June 2025 20:37:58 +0000 (0:00:31.949) 0:05:34.656 ***********
2025-06-02 20:38:05.393867 | orchestrator | changed: [localhost]
2025-06-02 20:38:05.393990 | orchestrator |
2025-06-02 20:38:05.394011 | orchestrator | TASK [Attach test volume] ******************************************************
2025-06-02 20:38:05.394329 | orchestrator | Monday 02 June 2025 20:38:05 +0000 (0:00:07.304) 0:05:41.960 ***********
2025-06-02 20:38:18.836576 | orchestrator | changed: [localhost]
2025-06-02 20:38:18.836715 | orchestrator |
2025-06-02 20:38:18.836734 | orchestrator | TASK [Create floating ip address] **********************************************
2025-06-02 20:38:18.836747 | orchestrator | Monday 02 June 2025 20:38:18 +0000 (0:00:13.440) 0:05:55.401 ***********
2025-06-02 20:38:24.041328 | orchestrator | ok: [localhost]
2025-06-02 20:38:24.041607 | orchestrator |
2025-06-02 20:38:24.042378 | orchestrator | TASK [Print floating ip address] ***********************************************
2025-06-02 20:38:24.043048 | orchestrator | Monday 02 June 2025 20:38:24 +0000 (0:00:05.206) 0:06:00.607 ***********
2025-06-02 20:38:24.078848 | orchestrator | ok: [localhost] => {
2025-06-02 20:38:24.078953 | orchestrator |  "msg": "192.168.112.123"
2025-06-02 20:38:24.079932 | orchestrator | }
2025-06-02 20:38:24.080672 | orchestrator |
2025-06-02 20:38:24.081849 | orchestrator | PLAY RECAP *********************************************************************
2025-06-02 20:38:24.081928 | orchestrator | 2025-06-02 20:38:24 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-06-02 20:38:24.081957 | orchestrator | 2025-06-02 20:38:24 | INFO  | Please wait and do not abort execution.
2025-06-02 20:38:24.082355 | orchestrator | localhost : ok=20  changed=18  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-02 20:38:24.082781 | orchestrator |
2025-06-02 20:38:24.083924 | orchestrator |
2025-06-02 20:38:24.084699 | orchestrator | TASKS RECAP ********************************************************************
2025-06-02 20:38:24.085196 | orchestrator | Monday 02 June 2025 20:38:24 +0000 (0:00:00.038) 0:06:00.646 ***********
2025-06-02 20:38:24.085603 | orchestrator | ===============================================================================
2025-06-02 20:38:24.086372 | orchestrator | Create test instances ------------------------------------------------- 202.60s
2025-06-02 20:38:24.086604 | orchestrator | Add tag to instances --------------------------------------------------- 31.95s
2025-06-02 20:38:24.087138 | orchestrator | Add metadata to instances ---------------------------------------------- 23.05s
2025-06-02 20:38:24.087751 | orchestrator | Create test network topology ------------------------------------------- 16.55s
2025-06-02 20:38:24.088050 | orchestrator | Attach test volume ----------------------------------------------------- 13.44s
2025-06-02 20:38:24.088792 | orchestrator | Add member roles to user test ------------------------------------------ 12.03s
2025-06-02 20:38:24.089358 | orchestrator | Create test volume ------------------------------------------------------ 7.30s
2025-06-02 20:38:24.089459 | orchestrator | Add manager role to user test-admin ------------------------------------- 6.05s
2025-06-02 20:38:24.089955 | orchestrator | Create ssh security group ----------------------------------------------- 5.42s
2025-06-02 20:38:24.090700 | orchestrator | Create floating ip address ---------------------------------------------- 5.21s
2025-06-02 20:38:24.091052 | orchestrator | Create test server group ------------------------------------------------ 4.98s
2025-06-02 20:38:24.091262 | orchestrator | Create test keypair ----------------------------------------------------- 4.29s
2025-06-02 20:38:24.091886 | orchestrator | Add rule to ssh security group ------------------------------------------ 4.24s
2025-06-02 20:38:24.093131 | orchestrator | Create test user -------------------------------------------------------- 4.08s
2025-06-02 20:38:24.093694 | orchestrator | Create test-admin user -------------------------------------------------- 4.07s
2025-06-02 20:38:24.094603 | orchestrator | Create test project ----------------------------------------------------- 3.97s
2025-06-02 20:38:24.095284 | orchestrator | Add rule to icmp security group ----------------------------------------- 3.95s
2025-06-02 20:38:24.096041 | orchestrator | Create icmp security group ---------------------------------------------- 3.77s
2025-06-02 20:38:24.096894 | orchestrator | Create test domain ------------------------------------------------------ 3.58s
2025-06-02 20:38:24.098347 | orchestrator | Print floating ip address ----------------------------------------------- 0.04s
2025-06-02 20:38:24.580480 | orchestrator | + server_list
2025-06-02 20:38:24.580547 | orchestrator | + openstack --os-cloud test server list
2025-06-02 20:38:28.360517 | orchestrator | +--------------------------------------+--------+--------+----------------------------------------------------+--------------+------------+
2025-06-02 20:38:28.360621 | orchestrator | | ID | Name | Status | Networks | Image | Flavor |
2025-06-02 20:38:28.360635 | orchestrator | +--------------------------------------+--------+--------+----------------------------------------------------+--------------+------------+
2025-06-02 20:38:28.360647 | orchestrator | | 00f08e50-cb87-452d-9821-70e87fe29147 | test-4 | ACTIVE | auto_allocated_network=10.42.0.36, 192.168.112.145 | Cirros 0.6.2 | SCS-1L-1-5 |
2025-06-02 20:38:28.360658 | orchestrator | | c051aed9-29d1-4a78-8f00-ce8c83422332 | test-3 | ACTIVE | auto_allocated_network=10.42.0.34, 192.168.112.197 | Cirros 0.6.2 | SCS-1L-1-5 |
2025-06-02 20:38:28.360669 | orchestrator | | a362cf5b-9970-47ea-a83e-2c7dad598cff | test-2 | ACTIVE | auto_allocated_network=10.42.0.24, 192.168.112.167 | Cirros 0.6.2 | SCS-1L-1-5 |
2025-06-02 20:38:28.360680 | orchestrator | | ccdec7a0-b4ee-4a0d-8215-9ffa0be2b84e | test-1 | ACTIVE | auto_allocated_network=10.42.0.41, 192.168.112.128 | Cirros 0.6.2 | SCS-1L-1-5 |
2025-06-02 20:38:28.360691 | orchestrator | | c247ec47-0b8c-4716-8392-de75db8d6160 | test | ACTIVE | auto_allocated_network=10.42.0.28, 192.168.112.123 | Cirros 0.6.2 | SCS-1L-1-5 |
2025-06-02 20:38:28.360702 | orchestrator | +--------------------------------------+--------+--------+----------------------------------------------------+--------------+------------+
2025-06-02 20:38:28.611063 | orchestrator | + openstack --os-cloud test server show test
2025-06-02 20:38:32.039149 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2025-06-02 20:38:32.039241 | orchestrator | | Field | Value |
2025-06-02 20:38:32.039254 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2025-06-02 20:38:32.039264 | orchestrator | | OS-DCF:diskConfig | MANUAL |
2025-06-02 20:38:32.039293 | orchestrator | | OS-EXT-AZ:availability_zone | nova |
2025-06-02 20:38:32.039304 | orchestrator | | OS-EXT-SRV-ATTR:host | None |
2025-06-02 20:38:32.039321 | orchestrator |
| OS-EXT-SRV-ATTR:hostname | test |
2025-06-02 20:38:32.039331 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None |
2025-06-02 20:38:32.039341 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None |
2025-06-02 20:38:32.039352 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None |
2025-06-02 20:38:32.039361 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None |
2025-06-02 20:38:32.039385 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None |
2025-06-02 20:38:32.039394 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None |
2025-06-02 20:38:32.039404 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None |
2025-06-02 20:38:32.039419 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None |
2025-06-02 20:38:32.039429 | orchestrator | | OS-EXT-STS:power_state | Running |
2025-06-02 20:38:32.039443 | orchestrator | | OS-EXT-STS:task_state | None |
2025-06-02 20:38:32.039453 | orchestrator | | OS-EXT-STS:vm_state | active |
2025-06-02 20:38:32.039462 | orchestrator | | OS-SRV-USG:launched_at | 2025-06-02T20:34:10.000000 |
2025-06-02 20:38:32.039472 | orchestrator | | OS-SRV-USG:terminated_at | None |
2025-06-02 20:38:32.039483 | orchestrator | | accessIPv4 | |
2025-06-02 20:38:32.039492 | orchestrator | | accessIPv6 | |
2025-06-02 20:38:32.039501 | orchestrator | | addresses | auto_allocated_network=10.42.0.28, 192.168.112.123 |
2025-06-02 20:38:32.039516 | orchestrator | | config_drive | |
2025-06-02 20:38:32.039526 | orchestrator | | created | 2025-06-02T20:33:48Z |
2025-06-02 20:38:32.039544 | orchestrator | | description | None |
2025-06-02 20:38:32.039555 | orchestrator | | flavor | description=, disk='5', ephemeral='0', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:name-v1='SCS-1L:1:5', extra_specs.scs:name-v2='SCS-1L-1-5', id='SCS-1L-1-5', is_disabled=, is_public='True', location=, name='SCS-1L-1-5', original_name='SCS-1L-1-5', ram='1024', rxtx_factor=, swap='0', vcpus='1' |
2025-06-02 20:38:32.039565 | orchestrator | | hostId | 40bbb97d5c29984fe40d85a2a3e974639b4ea75d970a69fe269f2182 |
2025-06-02 20:38:32.039579 | orchestrator | | host_status | None |
2025-06-02 20:38:32.039590 | orchestrator | | id | c247ec47-0b8c-4716-8392-de75db8d6160 |
2025-06-02 20:38:32.039600 | orchestrator | | image | Cirros 0.6.2 (d117d3a9-37fe-4947-b092-1706c647ed01) |
2025-06-02 20:38:32.039610 | orchestrator | | key_name | test |
2025-06-02 20:38:32.039620 | orchestrator | | locked | False |
2025-06-02 20:38:32.039631 | orchestrator | | locked_reason | None |
2025-06-02 20:38:32.039642 | orchestrator | | name | test |
2025-06-02 20:38:32.039658 | orchestrator | | pinned_availability_zone | None |
2025-06-02 20:38:32.039675 | orchestrator | | progress | 0 |
2025-06-02 20:38:32.039686 | orchestrator | | project_id | ed31582924684aba966c93fbdcb9e5e5 |
2025-06-02 20:38:32.039697 | orchestrator | | properties | hostname='test' |
2025-06-02 20:38:32.039712 | orchestrator | | security_groups | name='icmp' |
2025-06-02 20:38:32.039723 | orchestrator | | | name='ssh' |
2025-06-02 20:38:32.039734 | orchestrator | | server_groups | None |
2025-06-02 20:38:32.039746 | orchestrator | | status | ACTIVE |
2025-06-02 20:38:32.039757 | orchestrator | | tags | test |
2025-06-02 20:38:32.039768 | orchestrator | | trusted_image_certificates | None |
2025-06-02 20:38:32.039780 | orchestrator | | updated | 2025-06-02T20:37:07Z |
2025-06-02 20:38:32.039796 | orchestrator | | user_id | 7114783552464ff38684035e7936e240 |
2025-06-02 20:38:32.039820 | orchestrator | | volumes_attached | delete_on_termination='False', id='2cabba8c-3ec2-4590-933e-711abf6431f7' |
2025-06-02 20:38:32.042969 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2025-06-02 20:38:32.298672 | orchestrator | + openstack --os-cloud test server show test-1
2025-06-02 20:38:35.570293 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2025-06-02 20:38:35.570392 | orchestrator | | Field | Value |
2025-06-02 20:38:35.570421 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2025-06-02 20:38:35.570430 | orchestrator | | OS-DCF:diskConfig | MANUAL |
2025-06-02 20:38:35.570439 | orchestrator | | OS-EXT-AZ:availability_zone | nova |
2025-06-02 20:38:35.570449 | orchestrator | | OS-EXT-SRV-ATTR:host | None |
2025-06-02 20:38:35.570457 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-1 |
2025-06-02 20:38:35.570467 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None |
2025-06-02 20:38:35.570495 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None |
2025-06-02 20:38:35.570503 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None |
2025-06-02 20:38:35.570508 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None |
2025-06-02 20:38:35.570527 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None |
2025-06-02 20:38:35.570533 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None |
2025-06-02 20:38:35.570538 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None |
2025-06-02 20:38:35.570544 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None |
2025-06-02 20:38:35.570549 | orchestrator | | OS-EXT-STS:power_state | Running |
2025-06-02 20:38:35.570561 | orchestrator | | OS-EXT-STS:task_state | None |
2025-06-02 20:38:35.570567 | orchestrator | | OS-EXT-STS:vm_state | active |
2025-06-02 20:38:35.570572 | orchestrator | | OS-SRV-USG:launched_at | 2025-06-02T20:34:54.000000 |
2025-06-02 20:38:35.570582 | orchestrator | | OS-SRV-USG:terminated_at | None |
2025-06-02 20:38:35.570587 | orchestrator | | accessIPv4 | |
2025-06-02 20:38:35.570592 | orchestrator | | accessIPv6 | |
2025-06-02 20:38:35.570597 | orchestrator | | addresses | auto_allocated_network=10.42.0.41, 192.168.112.128 |
2025-06-02 20:38:35.570606 | orchestrator | | config_drive | |
2025-06-02 20:38:35.570612 | orchestrator | | created | 2025-06-02T20:34:33Z |
2025-06-02 20:38:35.570620 | orchestrator | | description | None |
2025-06-02 20:38:35.570625 | orchestrator | | flavor | description=, disk='5', ephemeral='0', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:name-v1='SCS-1L:1:5', extra_specs.scs:name-v2='SCS-1L-1-5', id='SCS-1L-1-5', is_disabled=, is_public='True', location=, name='SCS-1L-1-5', original_name='SCS-1L-1-5', ram='1024', rxtx_factor=, swap='0', vcpus='1' |
2025-06-02 20:38:35.570631 | orchestrator | | hostId | 0c127f24da936eacca9bcc88834ce8c62a88da372c2affd486018147 |
2025-06-02 20:38:35.570636 | orchestrator | | host_status | None |
2025-06-02 20:38:35.570641 | orchestrator | | id | ccdec7a0-b4ee-4a0d-8215-9ffa0be2b84e |
2025-06-02 20:38:35.570650 | orchestrator | | image | Cirros 0.6.2 (d117d3a9-37fe-4947-b092-1706c647ed01) |
2025-06-02 20:38:35.570655 | orchestrator | | key_name | test |
2025-06-02 20:38:35.570661 | orchestrator
| | locked | False | 2025-06-02 20:38:35.570666 | orchestrator | | locked_reason | None | 2025-06-02 20:38:35.570671 | orchestrator | | name | test-1 | 2025-06-02 20:38:35.570680 | orchestrator | | pinned_availability_zone | None | 2025-06-02 20:38:35.570685 | orchestrator | | progress | 0 | 2025-06-02 20:38:35.570694 | orchestrator | | project_id | ed31582924684aba966c93fbdcb9e5e5 | 2025-06-02 20:38:35.570699 | orchestrator | | properties | hostname='test-1' | 2025-06-02 20:38:35.570704 | orchestrator | | security_groups | name='icmp' | 2025-06-02 20:38:35.570710 | orchestrator | | | name='ssh' | 2025-06-02 20:38:35.570719 | orchestrator | | server_groups | None | 2025-06-02 20:38:35.570724 | orchestrator | | status | ACTIVE | 2025-06-02 20:38:35.570730 | orchestrator | | tags | test | 2025-06-02 20:38:35.570736 | orchestrator | | trusted_image_certificates | None | 2025-06-02 20:38:35.570741 | orchestrator | | updated | 2025-06-02T20:37:12Z | 2025-06-02 20:38:35.570749 | orchestrator | | user_id | 7114783552464ff38684035e7936e240 | 2025-06-02 20:38:35.570754 | orchestrator | | volumes_attached | | 2025-06-02 20:38:35.575831 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-06-02 20:38:35.819878 | orchestrator | + openstack --os-cloud test server show test-2 2025-06-02 20:38:38.897821 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 
| Field | Value |
| OS-DCF:diskConfig | MANUAL |
| OS-EXT-AZ:availability_zone | nova |
| OS-EXT-SRV-ATTR:host | None |
| OS-EXT-SRV-ATTR:hostname | test-2 |
| OS-EXT-SRV-ATTR:hypervisor_hostname | None |
| OS-EXT-SRV-ATTR:instance_name | None |
| OS-EXT-SRV-ATTR:kernel_id | None |
| OS-EXT-SRV-ATTR:launch_index | None |
| OS-EXT-SRV-ATTR:ramdisk_id | None |
| OS-EXT-SRV-ATTR:reservation_id | None |
| OS-EXT-SRV-ATTR:root_device_name | None |
| OS-EXT-SRV-ATTR:user_data | None |
| OS-EXT-STS:power_state | Running |
| OS-EXT-STS:task_state | None |
| OS-EXT-STS:vm_state | active |
| OS-SRV-USG:launched_at | 2025-06-02T20:35:37.000000 |
| OS-SRV-USG:terminated_at | None |
| accessIPv4 | |
| accessIPv6 | |
| addresses | auto_allocated_network=10.42.0.24, 192.168.112.167 |
| config_drive | |
| created | 2025-06-02T20:35:14Z |
| description | None |
| flavor | description=, disk='5', ephemeral='0', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:name-v1='SCS-1L:1:5', extra_specs.scs:name-v2='SCS-1L-1-5', id='SCS-1L-1-5', is_disabled=, is_public='True', location=, name='SCS-1L-1-5', original_name='SCS-1L-1-5', ram='1024', rxtx_factor=, swap='0', vcpus='1' |
| hostId | e10d631310ef8d7ca7517c909a543db393d2d50b021eca19709215d7 |
| host_status | None |
| id | a362cf5b-9970-47ea-a83e-2c7dad598cff |
| image | Cirros 0.6.2 (d117d3a9-37fe-4947-b092-1706c647ed01) |
| key_name | test |
| locked | False |
| locked_reason | None |
| name | test-2 |
| pinned_availability_zone | None |
| progress | 0 |
| project_id | ed31582924684aba966c93fbdcb9e5e5 |
| properties | hostname='test-2' |
| security_groups | name='icmp' |
| | name='ssh' |
| server_groups | None |
| status | ACTIVE |
| tags | test |
| trusted_image_certificates | None |
| updated | 2025-06-02T20:37:16Z |
| user_id | 7114783552464ff38684035e7936e240 |
| volumes_attached | |
2025-06-02 20:38:39.157391 | orchestrator | + openstack --os-cloud test server show test-3
| Field | Value |
| OS-DCF:diskConfig | MANUAL |
| OS-EXT-AZ:availability_zone | nova |
| OS-EXT-SRV-ATTR:host | None |
| OS-EXT-SRV-ATTR:hostname | test-3 |
| OS-EXT-SRV-ATTR:hypervisor_hostname | None |
| OS-EXT-SRV-ATTR:instance_name | None |
| OS-EXT-SRV-ATTR:kernel_id | None |
| OS-EXT-SRV-ATTR:launch_index | None |
| OS-EXT-SRV-ATTR:ramdisk_id | None |
| OS-EXT-SRV-ATTR:reservation_id | None |
| OS-EXT-SRV-ATTR:root_device_name | None |
| OS-EXT-SRV-ATTR:user_data | None |
| OS-EXT-STS:power_state | Running |
| OS-EXT-STS:task_state | None |
| OS-EXT-STS:vm_state | active |
| OS-SRV-USG:launched_at | 2025-06-02T20:36:13.000000 |
| OS-SRV-USG:terminated_at | None |
| accessIPv4 | |
| accessIPv6 | |
| addresses | auto_allocated_network=10.42.0.34, 192.168.112.197 |
| config_drive | |
| created | 2025-06-02T20:35:57Z |
| description | None |
| flavor | description=, disk='5', ephemeral='0', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:name-v1='SCS-1L:1:5', extra_specs.scs:name-v2='SCS-1L-1-5', id='SCS-1L-1-5', is_disabled=, is_public='True', location=, name='SCS-1L-1-5', original_name='SCS-1L-1-5', ram='1024', rxtx_factor=, swap='0', vcpus='1' |
| hostId | 40bbb97d5c29984fe40d85a2a3e974639b4ea75d970a69fe269f2182 |
| host_status | None |
| id | c051aed9-29d1-4a78-8f00-ce8c83422332 |
| image | Cirros 0.6.2 (d117d3a9-37fe-4947-b092-1706c647ed01) |
| key_name | test |
| locked | False |
| locked_reason | None |
| name | test-3 |
| pinned_availability_zone | None |
| progress | 0 |
| project_id | ed31582924684aba966c93fbdcb9e5e5 |
| properties | hostname='test-3' |
| security_groups | name='icmp' |
| | name='ssh' |
| server_groups | None |
| status | ACTIVE |
| tags | test |
| trusted_image_certificates | None |
| updated | 2025-06-02T20:37:21Z |
| user_id | 7114783552464ff38684035e7936e240 |
| volumes_attached | |
2025-06-02 20:38:42.653856 | orchestrator | + openstack --os-cloud test server show test-4
| Field | Value |
| OS-DCF:diskConfig | MANUAL |
| OS-EXT-AZ:availability_zone | nova |
| OS-EXT-SRV-ATTR:host | None |
| OS-EXT-SRV-ATTR:hostname | test-4 |
| OS-EXT-SRV-ATTR:hypervisor_hostname | None |
| OS-EXT-SRV-ATTR:instance_name | None |
| OS-EXT-SRV-ATTR:kernel_id | None |
| OS-EXT-SRV-ATTR:launch_index | None |
| OS-EXT-SRV-ATTR:ramdisk_id | None |
| OS-EXT-SRV-ATTR:reservation_id | None |
| OS-EXT-SRV-ATTR:root_device_name | None |
| OS-EXT-SRV-ATTR:user_data | None |
| OS-EXT-STS:power_state | Running |
| OS-EXT-STS:task_state | None |
| OS-EXT-STS:vm_state | active |
| OS-SRV-USG:launched_at | 2025-06-02T20:36:47.000000 |
| OS-SRV-USG:terminated_at | None |
| accessIPv4 | |
| accessIPv6 | |
| addresses | auto_allocated_network=10.42.0.36, 192.168.112.145 |
| config_drive | |
| created | 2025-06-02T20:36:30Z |
| description | None |
| flavor | description=, disk='5', ephemeral='0', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:name-v1='SCS-1L:1:5', extra_specs.scs:name-v2='SCS-1L-1-5', id='SCS-1L-1-5', is_disabled=, is_public='True', location=, name='SCS-1L-1-5', original_name='SCS-1L-1-5', ram='1024', rxtx_factor=, swap='0', vcpus='1' |
| hostId | 0c127f24da936eacca9bcc88834ce8c62a88da372c2affd486018147 |
| host_status | None |
| id | 00f08e50-cb87-452d-9821-70e87fe29147 |
| image | Cirros 0.6.2 (d117d3a9-37fe-4947-b092-1706c647ed01) |
| key_name | test |
| locked | False |
| locked_reason | None |
| name | test-4 |
| pinned_availability_zone | None |
| progress | 0 |
| project_id | ed31582924684aba966c93fbdcb9e5e5 |
| properties | hostname='test-4' |
| security_groups | name='icmp' |
| | name='ssh' |
| server_groups | None |
| status | ACTIVE |
| tags | test |
| trusted_image_certificates | None |
| updated | 2025-06-02T20:37:25Z |
| user_id | 7114783552464ff38684035e7936e240 |
| volumes_attached | |
2025-06-02 20:38:45.989619 | orchestrator | + server_ping
2025-06-02 20:38:45.991738 | orchestrator | ++ openstack --os-cloud test floating ip list --status ACTIVE -f value -c 'Floating IP Address'
2025-06-02 20:38:45.991780 | orchestrator | ++ tr -d '\r'
2025-06-02 20:38:48.820646 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2025-06-02 20:38:48.820738 | orchestrator | + ping -c3 192.168.112.145
2025-06-02 20:38:48.835776 | orchestrator | PING 192.168.112.145 (192.168.112.145) 56(84) bytes of data.
2025-06-02 20:38:48.835857 | orchestrator | 64 bytes from 192.168.112.145: icmp_seq=1 ttl=63 time=7.87 ms
2025-06-02 20:38:49.831994 | orchestrator | 64 bytes from 192.168.112.145: icmp_seq=2 ttl=63 time=2.88 ms
2025-06-02 20:38:50.833677 | orchestrator | 64 bytes from 192.168.112.145: icmp_seq=3 ttl=63 time=2.23 ms
2025-06-02 20:38:50.833774 | orchestrator |
2025-06-02 20:38:50.833789 | orchestrator | --- 192.168.112.145 ping statistics ---
2025-06-02 20:38:50.833800 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2002ms
2025-06-02 20:38:50.833811 | orchestrator | rtt min/avg/max/mdev = 2.230/4.326/7.869/2.519 ms
2025-06-02 20:38:50.833822 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2025-06-02 20:38:50.833832 | orchestrator | + ping -c3 192.168.112.197
2025-06-02 20:38:50.845587 | orchestrator | PING 192.168.112.197 (192.168.112.197) 56(84) bytes of data.
2025-06-02 20:38:50.845695 | orchestrator | 64 bytes from 192.168.112.197: icmp_seq=1 ttl=63 time=7.35 ms
2025-06-02 20:38:51.842213 | orchestrator | 64 bytes from 192.168.112.197: icmp_seq=2 ttl=63 time=2.56 ms
2025-06-02 20:38:52.843415 | orchestrator | 64 bytes from 192.168.112.197: icmp_seq=3 ttl=63 time=1.92 ms
2025-06-02 20:38:52.843494 | orchestrator |
2025-06-02 20:38:52.843500 | orchestrator | --- 192.168.112.197 ping statistics ---
2025-06-02 20:38:52.843507 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2025-06-02 20:38:52.843512 | orchestrator | rtt min/avg/max/mdev = 1.915/3.940/7.348/2.424 ms
2025-06-02 20:38:52.844585 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2025-06-02 20:38:52.844670 | orchestrator | + ping -c3 192.168.112.167
2025-06-02 20:38:52.860842 | orchestrator | PING 192.168.112.167 (192.168.112.167) 56(84) bytes of data.
2025-06-02 20:38:52.860928 | orchestrator | 64 bytes from 192.168.112.167: icmp_seq=1 ttl=63 time=14.1 ms
2025-06-02 20:38:53.850692 | orchestrator | 64 bytes from 192.168.112.167: icmp_seq=2 ttl=63 time=2.54 ms
2025-06-02 20:38:54.852378 | orchestrator | 64 bytes from 192.168.112.167: icmp_seq=3 ttl=63 time=2.40 ms
2025-06-02 20:38:54.852452 | orchestrator |
2025-06-02 20:38:54.852458 | orchestrator | --- 192.168.112.167 ping statistics ---
2025-06-02 20:38:54.852464 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2025-06-02 20:38:54.852468 | orchestrator | rtt min/avg/max/mdev = 2.401/6.349/14.108/5.486 ms
2025-06-02 20:38:54.853539 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2025-06-02 20:38:54.853561 | orchestrator | + ping -c3 192.168.112.128
2025-06-02 20:38:54.866396 | orchestrator | PING 192.168.112.128 (192.168.112.128) 56(84) bytes of data.
2025-06-02 20:38:54.866487 | orchestrator | 64 bytes from 192.168.112.128: icmp_seq=1 ttl=63 time=8.06 ms
2025-06-02 20:38:55.861874 | orchestrator | 64 bytes from 192.168.112.128: icmp_seq=2 ttl=63 time=2.72 ms
2025-06-02 20:38:56.863523 | orchestrator | 64 bytes from 192.168.112.128: icmp_seq=3 ttl=63 time=2.46 ms
2025-06-02 20:38:56.863602 | orchestrator |
2025-06-02 20:38:56.863610 | orchestrator | --- 192.168.112.128 ping statistics ---
2025-06-02 20:38:56.863617 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2025-06-02 20:38:56.863622 | orchestrator | rtt min/avg/max/mdev = 2.456/4.410/8.057/2.581 ms
2025-06-02 20:38:56.864616 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2025-06-02 20:38:56.864635 | orchestrator | + ping -c3 192.168.112.123
2025-06-02 20:38:56.874543 | orchestrator | PING 192.168.112.123 (192.168.112.123) 56(84) bytes of data.
2025-06-02 20:38:56.874634 | orchestrator | 64 bytes from 192.168.112.123: icmp_seq=1 ttl=63 time=5.50 ms
2025-06-02 20:38:57.874155 | orchestrator | 64 bytes from 192.168.112.123: icmp_seq=2 ttl=63 time=2.85 ms
2025-06-02 20:38:58.874729 | orchestrator | 64 bytes from 192.168.112.123: icmp_seq=3 ttl=63 time=1.91 ms
2025-06-02 20:38:58.874821 | orchestrator |
2025-06-02 20:38:58.874844 | orchestrator | --- 192.168.112.123 ping statistics ---
2025-06-02 20:38:58.874863 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2025-06-02 20:38:58.874877 | orchestrator | rtt min/avg/max/mdev = 1.905/3.418/5.503/1.523 ms
2025-06-02 20:38:58.874890 | orchestrator | + [[ 9.1.0 == \l\a\t\e\s\t ]]
2025-06-02 20:38:59.235656 | orchestrator | ok: Runtime: 0:11:10.848450
2025-06-02 20:38:59.306452 |
2025-06-02 20:38:59.306599 | TASK [Run tempest]
2025-06-02 20:38:59.845372 | orchestrator | skipping: Conditional result was False
2025-06-02 20:38:59.863930 |
2025-06-02 20:38:59.864095 | TASK [Check prometheus alert status]
2025-06-02 20:39:00.400619 | orchestrator | skipping: Conditional result was False
2025-06-02 20:39:00.404167 |
2025-06-02 20:39:00.404351 | PLAY RECAP
2025-06-02 20:39:00.404564 | orchestrator | ok: 24 changed: 11 unreachable: 0 failed: 0 skipped: 5 rescued: 0 ignored: 0
2025-06-02 20:39:00.404631 |
2025-06-02 20:39:00.633330 | RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/deploy.yml@main]
2025-06-02 20:39:00.634713 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/post.yml@main]
2025-06-02 20:39:01.390103 |
2025-06-02 20:39:01.390261 | PLAY [Post output play]
2025-06-02 20:39:01.406111 |
2025-06-02 20:39:01.406254 | LOOP [stage-output : Register sources]
2025-06-02 20:39:01.474525 |
2025-06-02 20:39:01.474935 | TASK [stage-output : Check sudo]
2025-06-02 20:39:02.329488 | orchestrator | sudo: a password is required
2025-06-02 20:39:02.515697 | orchestrator | ok: Runtime: 0:00:00.015160
2025-06-02 20:39:02.526158 |
2025-06-02 20:39:02.526311 | LOOP [stage-output : Set source and destination for files and folders]
2025-06-02 20:39:02.565590 |
2025-06-02 20:39:02.566337 | TASK [stage-output : Build a list of source, dest dictionaries]
2025-06-02 20:39:02.645993 | orchestrator | ok
2025-06-02 20:39:02.655094 |
2025-06-02 20:39:02.655252 | LOOP [stage-output : Ensure target folders exist]
2025-06-02 20:39:03.137005 | orchestrator | ok: "docs"
2025-06-02 20:39:03.137380 |
2025-06-02 20:39:03.397385 | orchestrator | ok: "artifacts"
2025-06-02 20:39:03.665840 | orchestrator | ok: "logs"
2025-06-02 20:39:03.683187 |
2025-06-02 20:39:03.683368 | LOOP [stage-output : Copy files and folders to staging folder]
2025-06-02 20:39:03.724311 |
2025-06-02 20:39:03.724675 | TASK [stage-output : Make all log files readable]
2025-06-02 20:39:04.037792 | orchestrator | ok
2025-06-02 20:39:04.046688 |
2025-06-02 20:39:04.046826 | TASK [stage-output : Rename log files that match extensions_to_txt]
2025-06-02 20:39:04.091509 | orchestrator | skipping: Conditional result was False
2025-06-02 20:39:04.106092 |
2025-06-02 20:39:04.106276 | TASK [stage-output : Discover log files for compression]
2025-06-02 20:39:04.130536 | orchestrator | skipping: Conditional result was False
2025-06-02 20:39:04.140572 |
2025-06-02 20:39:04.140698 | LOOP [stage-output : Archive everything from logs]
2025-06-02 20:39:04.185709 |
2025-06-02 20:39:04.185873 | PLAY [Post cleanup play]
2025-06-02 20:39:04.193809 |
2025-06-02 20:39:04.193917 | TASK [Set cloud fact (Zuul deployment)]
2025-06-02 20:39:04.251787 | orchestrator | ok
2025-06-02 20:39:04.263555 |
2025-06-02 20:39:04.263695 | TASK [Set cloud fact (local deployment)]
2025-06-02 20:39:04.297561 | orchestrator | skipping: Conditional result was False
2025-06-02 20:39:04.308093 |
2025-06-02 20:39:04.308229 | TASK [Clean the cloud environment]
2025-06-02 20:39:04.951843 | orchestrator | 2025-06-02 20:39:04 - clean up servers
2025-06-02 20:39:05.709357 | orchestrator | 2025-06-02 20:39:05 - testbed-manager
2025-06-02 20:39:05.798973 | orchestrator | 2025-06-02 20:39:05 - testbed-node-4
2025-06-02 20:39:05.915968 | orchestrator | 2025-06-02 20:39:05 - testbed-node-3
2025-06-02 20:39:06.021333 | orchestrator | 2025-06-02 20:39:06 - testbed-node-1
2025-06-02 20:39:06.116041 | orchestrator | 2025-06-02 20:39:06 - testbed-node-2
2025-06-02 20:39:06.216906 | orchestrator | 2025-06-02 20:39:06 - testbed-node-0
2025-06-02 20:39:06.305032 | orchestrator | 2025-06-02 20:39:06 - testbed-node-5
2025-06-02 20:39:06.398137 | orchestrator | 2025-06-02 20:39:06 - clean up keypairs
2025-06-02 20:39:06.413149 | orchestrator | 2025-06-02 20:39:06 - testbed
2025-06-02 20:39:06.436839 | orchestrator | 2025-06-02 20:39:06 - wait for servers to be gone
2025-06-02 20:39:19.750799 | orchestrator | 2025-06-02 20:39:19 - clean up ports
2025-06-02 20:39:19.962954 | orchestrator | 2025-06-02 20:39:19 - 092c28d3-fd98-47ba-8eeb-028ea29babf3
2025-06-02 20:39:20.258188 | orchestrator | 2025-06-02 20:39:20 - 1f6683ce-6e17-4617-b126-7c709f9c502d
2025-06-02 20:39:20.548752 | orchestrator | 2025-06-02 20:39:20 - 7cfa3ea2-7d79-4249-9dc3-f37a722cdc4f
2025-06-02 20:39:20.996650 | orchestrator | 2025-06-02 20:39:20 - 8e35e266-5c49-45e0-a2be-249e7b388e28
2025-06-02 20:39:21.205601 | orchestrator | 2025-06-02 20:39:21 - ade686cc-2dba-40ac-89eb-5fce8810767f
2025-06-02 20:39:21.449851 | orchestrator | 2025-06-02 20:39:21 - c496f668-5cd2-4e9a-8aa1-6ef0cb301e96
2025-06-02 20:39:21.650593 | orchestrator | 2025-06-02 20:39:21 - c6b71504-167a-410c-9260-7d67b8f5a1f9
2025-06-02 20:39:22.282762 | orchestrator | 2025-06-02 20:39:22 - clean up volumes
2025-06-02 20:39:22.424728 | orchestrator | 2025-06-02 20:39:22 - testbed-volume-2-node-base
2025-06-02 20:39:22.462962 | orchestrator | 2025-06-02 20:39:22 - testbed-volume-0-node-base
2025-06-02 20:39:22.502980 | orchestrator | 2025-06-02 20:39:22 - testbed-volume-1-node-base
2025-06-02 20:39:22.544348 | orchestrator | 2025-06-02 20:39:22 - testbed-volume-4-node-base
2025-06-02 20:39:22.590130 | orchestrator | 2025-06-02 20:39:22 - testbed-volume-3-node-base
2025-06-02 20:39:22.635992 | orchestrator | 2025-06-02 20:39:22 - testbed-volume-5-node-base
2025-06-02 20:39:22.682774 | orchestrator | 2025-06-02 20:39:22 - testbed-volume-manager-base
2025-06-02 20:39:22.729114 | orchestrator | 2025-06-02 20:39:22 - testbed-volume-0-node-3
2025-06-02 20:39:22.777704 | orchestrator | 2025-06-02 20:39:22 - testbed-volume-7-node-4
2025-06-02 20:39:22.818690 | orchestrator | 2025-06-02 20:39:22 - testbed-volume-4-node-4
2025-06-02 20:39:22.862099 | orchestrator | 2025-06-02 20:39:22 - testbed-volume-3-node-3
2025-06-02 20:39:22.902357 | orchestrator | 2025-06-02 20:39:22 - testbed-volume-8-node-5
2025-06-02 20:39:22.945007 | orchestrator | 2025-06-02 20:39:22 - testbed-volume-1-node-4
2025-06-02 20:39:22.986664 | orchestrator | 2025-06-02 20:39:22 - testbed-volume-5-node-5
2025-06-02 20:39:23.029193 | orchestrator | 2025-06-02 20:39:23 - testbed-volume-6-node-3
2025-06-02 20:39:23.072149 | orchestrator | 2025-06-02 20:39:23 - testbed-volume-2-node-5
2025-06-02 20:39:23.120049 | orchestrator | 2025-06-02 20:39:23 - disconnect routers
2025-06-02 20:39:23.218264 | orchestrator | 2025-06-02 20:39:23 - testbed
2025-06-02 20:39:24.193763 | orchestrator | 2025-06-02 20:39:24 - clean up subnets
2025-06-02 20:39:24.255360 | orchestrator | 2025-06-02 20:39:24 - subnet-testbed-management
2025-06-02 20:39:24.417623 | orchestrator | 2025-06-02 20:39:24 - clean up networks
2025-06-02 20:39:24.592334 | orchestrator | 2025-06-02 20:39:24 - net-testbed-management
2025-06-02 20:39:24.866839 | orchestrator | 2025-06-02 20:39:24 - clean up security groups
2025-06-02 20:39:24.900916 | orchestrator | 2025-06-02 20:39:24 - testbed-node
2025-06-02 20:39:25.005367 | orchestrator | 2025-06-02 20:39:25 - testbed-management
2025-06-02 20:39:25.111029 | orchestrator | 2025-06-02 20:39:25 - clean up floating ips
2025-06-02 20:39:25.141415 | orchestrator | 2025-06-02 20:39:25 - 81.163.192.191
2025-06-02 20:39:25.492357 | orchestrator | 2025-06-02 20:39:25 - clean up routers
2025-06-02 20:39:25.611414 | orchestrator | 2025-06-02 20:39:25 - testbed
2025-06-02 20:39:26.448180 | orchestrator | ok: Runtime: 0:00:21.749256
2025-06-02 20:39:26.453036 |
2025-06-02 20:39:26.453351 | PLAY RECAP
2025-06-02 20:39:26.453847 | orchestrator | ok: 6 changed: 2 unreachable: 0 failed: 0 skipped: 7 rescued: 0 ignored: 0
2025-06-02 20:39:26.454045 |
2025-06-02 20:39:26.645029 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/post.yml@main]
2025-06-02 20:39:26.647489 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main]
2025-06-02 20:39:27.479465 |
2025-06-02 20:39:27.479644 | PLAY [Cleanup play]
2025-06-02 20:39:27.496565 |
2025-06-02 20:39:27.496732 | TASK [Set cloud fact (Zuul deployment)]
2025-06-02 20:39:27.577213 | orchestrator | ok
2025-06-02 20:39:27.588891 |
2025-06-02 20:39:27.589085 | TASK [Set cloud fact (local deployment)]
2025-06-02 20:39:27.614232 | orchestrator | skipping: Conditional result was False
2025-06-02 20:39:27.629955 |
2025-06-02 20:39:27.630108 | TASK [Clean the cloud environment]
2025-06-02 20:39:28.801340 | orchestrator | 2025-06-02 20:39:28 - clean up servers
2025-06-02 20:39:29.275674 | orchestrator | 2025-06-02 20:39:29 - clean up keypairs
2025-06-02 20:39:29.296567 | orchestrator | 2025-06-02 20:39:29 - wait for servers to be gone
2025-06-02 20:39:29.337599 | orchestrator | 2025-06-02 20:39:29 - clean up ports
2025-06-02 20:39:29.410070 | orchestrator | 2025-06-02 20:39:29 - clean up volumes
2025-06-02 20:39:29.476398 | orchestrator | 2025-06-02 20:39:29 - disconnect routers
2025-06-02 20:39:29.505139 | orchestrator | 2025-06-02 20:39:29 - clean up subnets
2025-06-02 20:39:29.525050 | orchestrator | 2025-06-02 20:39:29 - clean up networks
2025-06-02 20:39:30.107651 | orchestrator | 2025-06-02 20:39:30 - clean up security groups
2025-06-02 20:39:30.142315 | orchestrator | 2025-06-02 20:39:30 - clean up floating ips
2025-06-02 20:39:30.170699 | orchestrator | 2025-06-02 20:39:30 - clean up routers
2025-06-02 20:39:30.676008 | orchestrator | ok: Runtime: 0:00:01.781112
2025-06-02 20:39:30.678002 |
2025-06-02 20:39:30.678090 | PLAY RECAP
2025-06-02 20:39:30.678145 | orchestrator | ok: 2 changed: 1 unreachable: 0 failed: 0 skipped: 1 rescued: 0 ignored: 0
2025-06-02 20:39:30.678170 |
2025-06-02 20:39:30.816307 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main]
2025-06-02 20:39:30.817558 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main]
2025-06-02 20:39:31.581607 |
2025-06-02 20:39:31.581771 | PLAY [Base post-fetch]
2025-06-02 20:39:31.596969 |
2025-06-02 20:39:31.597100 | TASK [fetch-output : Set log path for multiple nodes]
2025-06-02 20:39:31.652835 | orchestrator | skipping: Conditional result was False
2025-06-02 20:39:31.668203 |
2025-06-02 20:39:31.668432 | TASK [fetch-output : Set log path for single node]
2025-06-02 20:39:31.715862 | orchestrator | ok
2025-06-02 20:39:31.724122 |
2025-06-02 20:39:31.724251 | LOOP [fetch-output : Ensure local output dirs]
2025-06-02 20:39:32.193559 | orchestrator -> localhost | ok: "/var/lib/zuul/builds/99ae87a14d3a4e8eb8632c860e169cea/work/logs"
2025-06-02 20:39:32.472001 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/99ae87a14d3a4e8eb8632c860e169cea/work/artifacts"
2025-06-02 20:39:32.745193 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/99ae87a14d3a4e8eb8632c860e169cea/work/docs"
2025-06-02 20:39:32.772518 |
2025-06-02 20:39:32.772675 | LOOP [fetch-output : Collect logs, artifacts and docs]
2025-06-02 20:39:33.727872 | orchestrator | changed: .d..t...... ./
2025-06-02 20:39:33.728131 | orchestrator | changed: All items complete
2025-06-02 20:39:33.728169 |
2025-06-02 20:39:34.464002 | orchestrator | changed: .d..t...... ./
2025-06-02 20:39:35.208109 | orchestrator | changed: .d..t...... ./
2025-06-02 20:39:35.229175 |
2025-06-02 20:39:35.229509 | LOOP [merge-output-to-logs : Move artifacts and docs to logs dir]
2025-06-02 20:39:35.268052 | orchestrator | skipping: Conditional result was False
2025-06-02 20:39:35.271635 | orchestrator | skipping: Conditional result was False
2025-06-02 20:39:35.283588 |
2025-06-02 20:39:35.283677 | PLAY RECAP
2025-06-02 20:39:35.283734 | orchestrator | ok: 3 changed: 2 unreachable: 0 failed: 0 skipped: 2 rescued: 0 ignored: 0
2025-06-02 20:39:35.283763 |
2025-06-02 20:39:35.410685 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main]
2025-06-02 20:39:35.411730 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main]
2025-06-02 20:39:36.152097 |
2025-06-02 20:39:36.152264 | PLAY [Base post]
2025-06-02 20:39:36.167191 |
2025-06-02 20:39:36.167331 | TASK [remove-build-sshkey : Remove the build SSH key from all nodes]
2025-06-02 20:39:37.213358 | orchestrator | changed
2025-06-02 20:39:37.241157 |
2025-06-02 20:39:37.241467 | PLAY RECAP
2025-06-02 20:39:37.241655 | orchestrator | ok: 1 changed: 1 unreachable: 0 failed: 0 skipped: 0 rescued: 0 ignored: 0
2025-06-02 20:39:37.241831 |
2025-06-02 20:39:37.368199 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main]
2025-06-02 20:39:37.369310 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-logs.yaml@main]
2025-06-02 20:39:38.143355 |
2025-06-02 20:39:38.143570 | PLAY [Base post-logs]
2025-06-02 20:39:38.154291 |
2025-06-02 20:39:38.154451 | TASK [generate-zuul-manifest : Generate Zuul manifest]
2025-06-02 20:39:38.658552 | localhost | changed
2025-06-02 20:39:38.668720 |
2025-06-02
20:39:38.668865 | TASK [generate-zuul-manifest : Return Zuul manifest URL to Zuul] 2025-06-02 20:39:38.696662 | localhost | ok 2025-06-02 20:39:38.702697 | 2025-06-02 20:39:38.702871 | TASK [Set zuul-log-path fact] 2025-06-02 20:39:38.731656 | localhost | ok 2025-06-02 20:39:38.746880 | 2025-06-02 20:39:38.747040 | TASK [set-zuul-log-path-fact : Set log path for a build] 2025-06-02 20:39:38.783501 | localhost | ok 2025-06-02 20:39:38.788113 | 2025-06-02 20:39:38.788232 | TASK [upload-logs : Create log directories] 2025-06-02 20:39:39.364697 | localhost | changed 2025-06-02 20:39:39.368221 | 2025-06-02 20:39:39.368344 | TASK [upload-logs : Ensure logs are readable before uploading] 2025-06-02 20:39:39.946232 | localhost -> localhost | ok: Runtime: 0:00:00.007524 2025-06-02 20:39:39.955413 | 2025-06-02 20:39:39.955678 | TASK [upload-logs : Upload logs to log server] 2025-06-02 20:39:40.515762 | localhost | Output suppressed because no_log was given 2025-06-02 20:39:40.520535 | 2025-06-02 20:39:40.520877 | LOOP [upload-logs : Compress console log and json output] 2025-06-02 20:39:40.578570 | localhost | skipping: Conditional result was False 2025-06-02 20:39:40.583923 | localhost | skipping: Conditional result was False 2025-06-02 20:39:40.596623 | 2025-06-02 20:39:40.596880 | LOOP [upload-logs : Upload compressed console log and json output] 2025-06-02 20:39:40.645401 | localhost | skipping: Conditional result was False 2025-06-02 20:39:40.646062 | 2025-06-02 20:39:40.651879 | localhost | skipping: Conditional result was False 2025-06-02 20:39:40.664513 | 2025-06-02 20:39:40.664745 | LOOP [upload-logs : Upload console log and json output]